Royal Society Athena Prize for Equality

The inaugural Royal Society Athena Prize 2016 recognises individuals and teams in the UK research community who have contributed towards the advancement of diversity in science, technology, engineering and mathematics (STEM) in their institutions and organisations.

This year I am really thrilled to say that my Research Centre was awarded a runners-up prize! You can read more on this here.

The Centre was cited as ‘acting as a role model for inclusiveness by promoting good practice and addressing cultural barriers both nationally and internationally’.

The Royal Society Athena Prize 2016, comprising a medal and a £5k gift, was awarded to the London Mathematical Society’s Women in Mathematics Committee, recognised for introducing a broad range of initiatives that have led to a nationwide change of culture in mathematics.

These initiatives are important for drawing the scientific community’s attention and effort towards equality and diversity.

If only there was a prize for equal pay too!


Repeat, repeat, repeat

Today, while I was reading articles on solar cells, I came across something that struck a real chord with me:

In the September 2014 issue of Nature Photonics, Zimmermann et al. had a commentary piece titled “Erroneous efficiency reports harm organic solar cell research” on page 669.

The authors commented that the mischaracterization of solar cell power conversion efficiencies, and the inconsistent data being published in scientific journals in the field, was particularly harmful for the area. The race to get the best results and publish them in the journals with the highest impact factors has in part led to people being less careful about incorrect measurements and poor reporting.

The danger when such articles multiply and proliferate is that the reported data are unreliable and one doesn’t know which data or papers to trust. The progress of the field as a whole is hampered.

Having data and results that can be trusted, repeated and verified is a must for scientific research. In some cases, the methods to be used for characterization are clearly laid out and researchers can follow these, and/or conduct standardized tests and measurements to show the veracity of their results. This instills confidence in readers about the work and should positively affect its citations too.
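To make concrete what a standardized characterization involves: the headline figures for a cell are the short-circuit current density (Jsc), the open-circuit voltage (Voc), the fill factor (FF) and the power conversion efficiency (PCE), all extracted from a calibrated current-voltage sweep under standard illumination. The short Python sketch below runs through that arithmetic on a made-up J-V curve; it only illustrates the standard definitions (not the specific protocol Zimmermann et al. recommend), and every number in it is invented.

import numpy as np

# Hypothetical J-V sweep under assumed AM1.5G illumination (100 mW/cm^2).
# A real measurement would come from a calibrated source and a masked device area.
v = np.linspace(0.0, 0.8, 200)                                        # applied voltage, V
j = 20.0 * (1.0 - (np.exp(v / 0.75) - 1) / (np.exp(0.8 / 0.75) - 1))  # toy current density, mA/cm^2

p_in = 100.0                    # incident power density, mW/cm^2 (standard test condition)
j_sc = j[0]                     # short-circuit current density: J at V = 0
v_oc = np.interp(0.0, -j, v)    # open-circuit voltage: V where J crosses zero
p_max = np.max(j * v)           # maximum power point, mW/cm^2
ff = p_max / (j_sc * v_oc)      # fill factor
pce = 100.0 * p_max / p_in      # power conversion efficiency, %

print(f"Jsc = {j_sc:.1f} mA/cm^2, Voc = {v_oc:.2f} V, FF = {ff:.2f}, PCE = {pce:.1f} %")

Get any of these inputs wrong (an uncalibrated lamp, an overestimated device area) and the error propagates straight into the reported efficiency, which is exactly the kind of problem the commentary worries about.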

Obviously such issues are not confined to one field alone. For numerical modeling, as I have said in previous posts, benchmarking the results of a new technique against existing test cases or analytical solutions is a must!
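As a deliberately trivial example of the kind of check I mean, here is a Python sketch (my own toy case, not from any particular paper) that benchmarks a numerical scheme against a problem with a known analytical solution before it is trusted on anything new: forward Euler applied to dy/dt = -y with y(0) = 1, whose exact solution is exp(-t). If the error does not shrink at roughly the expected first-order rate as the step is refined, something is wrong with the implementation.

import numpy as np

def forward_euler(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) from t = 0 to t_end using n_steps forward-Euler steps."""
    dt = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y = y + dt * f(t, y)
        t = t + dt
    return y

exact = np.exp(-1.0)  # analytical solution of dy/dt = -y, y(0) = 1, at t = 1
for n in (10, 100, 1000):
    approx = forward_euler(lambda t, y: -y, 1.0, 1.0, n)
    print(f"n = {n:5d}: error = {abs(approx - exact):.2e}")  # error should fall ~10x per 10x more steps

The same habit scales up: a new solver or model should reproduce published test cases and analytical limits before its new results are reported.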

The sheer number of papers reporting results that overestimated performance was quite a shock, though!

I think from now on I am going to be even more rigorous about my results as well as those of the papers I review/edit!

Working away

As some of my posts indicate, there are times when I feel as buried under work as anyone. And I am sure you can attest to feeling the same:
– not enough hours to finish the work waiting
– thinking about work when you should be engrossed in family and friends
– feeling stressed, and perhaps even low in confidence, because it seems you can’t cope

These things are increasingly happening to academics. In the UK, the employment contracts many of us sign do not stipulate maximum working hours or exact duties. Instead we are meant to perform the tasks considered necessary/relevant to our role by our employer. The time we spend performing these tasks is also, in some ways, implicitly decided by our employer.

Where does this leave us if we are overburdened?

The situation seems the same in every university: academic staff (junior staff especially) toiling away to teach several courses while trying to establish themselves as research superstars. Then there is the admin work… In order to cut costs, staff are fired and the work redistributed amongst fewer staff members, who somehow are supposed to do more with less (less time, less rest, fewer resources).
Apply for grants, publish in top journals, get excellent teaching scores from students, publish a book, do the admin… the list is endless.

In trying to achieve these targets we put in ever-increasing hours. Holidays are of shorter duration every year, and somehow the work laptop finds a way to come along even during weekends, late nights and holidays.

There is no overtime money from the employer for any of this. There is no one from the higher education authorities or any employment body/union to hold these employers to account.

Now this may seem like a rant and a moan (it is). But it is also more.

It is up to us not to fall into this trap. While we may not all be able to walk in with our resignation letter and walk out into a better job, we can assert some control over our minds.

Our anxiety at not being able to do as much as we think is needed is perhaps our biggest problem. If we can step back from the situation and the anxiety to view things objectively, we can make better decisions about our goals and how to achieve them.

I read an article recently which I found quite useful: http://www.timeshighereducation.co.uk/features/feature-work-less-do-more-live-better/2014929.article

In my view we need and deserve to be treated more fairly by employers; a complete change in attitude is needed in higher education. While we work towards this, we can also make changes in our individual lifestyles, thinking and perhaps self-management to help ourselves.

International Women’s Day

International Women’s Day comes around in March every year and, much like in previous years, there has been a bit of hype around it. The occasion is used by various women’s organisations, policy makers and governments to raise awareness of issues connected to women. The media is an important component in this ever-growing affair. And commerce is never far behind in exploiting every possible opportunity (behold the offers to women in shops: shop for more than x amount and get a 10% discount; never mind that the amount you need to purchase is huge and the discount is measly)!

[Cartoon on women’s equality]

For the scientific community, does this day have relevance?
The answer is yes. The so-called STEM (Science, Technology, Engineering and Mathematics) fields have historically, right up to this day, had a massive underrepresentation of women. Plenty has been written about this gap and for some years there have been attempts to address it. Groups like MWOSA are testament to this.

In this article I want to address what I see happening that can make a positive change in the future, as well as what all of us can do.

Celebrating female successes and creating role models is becoming a big part of the prevalent thinking. We will see more and more groups (MWOSA, IEEE Women in Photonics, Women in Physics and many others) bringing to the fore the achievements of their female members (through award ceremonies and magazine articles). This explicit recognition and celebration of successful women scientists will also go towards showcasing them as role models to younger women and girls. For example, check out the We the Geeks Google Hangout series at the White House, which celebrates some very cool women role models.

Women’s networks are getting a boost. The power of networks in helping members make connections (to get that job or promotion or new project) is widely recognised. Traditionally, women tend to have narrow but deep networks (compared to male counterparts, who on average have wide but shallow networks) and may often hesitate to ask for help unless they know a person very well. Increased training and awareness in all-female networks are catering to some of the specific behavioural styles women have.

Is there something that we can do as individuals?

Research has shown that women are just as prone to unconscious bias as men. Therefore, when it comes to interviewing candidates or peer reviewing proposals and papers, both women and men unconsciously (where direct prejudice is absent) tend to favour male candidates. Even when the gender is unknown, a name that seems “male” tends to get higher approval. Our understanding of unconscious bias is now better than it used to be.

So one thing that each one of us can do is to introspect, and perhaps take tests like the Implicit Association Test, to check our own tendency towards unconscious bias and eliminate it.

Another, perhaps even more powerful, strength we all have is our voices. As members of OSA and other technical bodies we can volunteer in outreach efforts to young girls, act as mentors to younger members, and ask the society to prioritise equality in its policy.

Many large corporations and businesses now train their recruitment managers on unconscious bias and treat it as a serious issue. They do not want to lose good talent because of such bias. In addition, there has been discussion on creating quotas for women on company boards. Some countries, like Norway, have implemented quotas, while in others targets have been set for businesses. The point is that the business world and policy makers are addressing the underrepresentation of women at the top level. Talent and ability are just as important here as in STEM, so the solutions being looked at do not compromise on quality.

Scientific bodies, research institutions and higher education bodies have not yet set targets (for female representation) or openly discussed quotas. Perhaps these can be thought of in different forms: gender-balanced editorial boards for journals, conference committees, and so on. As members we can contribute to this debate and bring it to centre stage.

OSA and IEEE Photonics are in many ways trendsetters: with the OSA CEO, Elizabeth Rogan, the immediate past president of OSA, Donna Strickland, and the President of the IEEE Photonics Society, Dalma Novak, all being female, a powerful message is sent to all the young women in Optics: you can get to the top.

The machinery for (research) money

Yesterday crystallised a piece of knowledge for me: universities now employ consultants (and regular employees) who help academics and researchers write grant applications. Some of these consultants consult specifically for EU (FP7 and now Horizon 2020) grant applications. They may be paid on a per-grant basis or a regular salary.

The service they provide is to help select the ideas for grant applications which (they think) could be successful, and to help academics write and polish these documents. These consultants, who may also have a technical background, are not active researchers or part of the proposed projects; their role is to maximise the chances of success in the cut-throat research funding environment.

Having seen (and applied for) an EU grant or two, I don’t think I would be alone in saying that:
– the calls for applications are long, dense and impenetrable, and don’t seem written for or by scientists
– the paperwork involved is massive and off-putting
– success rates are low

These factors have led to universities employing consultants to somehow overcome the first two obstacles. I wonder if it’s just me, or whether the opacity and unnecessary difficulty of applying (to the EU) is diverting funds from universities and research bodies to non-research and non-academic activities?
How did it come to this?

You, me and Science

Two recent articles have got my blood racing and the excitement has led to this post. Both relate to citizen science, a concept that involves the general public in science and makes big science accessible to everyone.

And how does citizen science work?

One example is the Ardusat satellites (and the like): tiny satellites on which time can be hired by virtually anyone: the high school student, the amateur astronomer, the layman! They carry simple equipment such as temperature sensors, Geiger counters and digital cameras. For $35-45 per day, people can hire time on them (in blocks of a few days) to perform experiments in space, take photographs of Earth and celestial objects from space… and much more. Firms such as NanoSatisfi mostly raised money through crowdfunding, but they have made it possible for everyone to work with a slice of space!

The second example is just as exciting because the possibilities are endless! The GalaxyZoo project, launched in 2007 by Chris Lintott and Kevin Schawinski, asked volunteers to classify galaxies as spiral or other shapes. By their estimation, the data they had collected from the Sloan Digital Sky Survey (about a million images of galaxies) would take years to sort through, and machine algorithms are still not as good as humans at recognising shapes. They reckoned they would get 50-odd volunteers and finish the work in a year and a half. Instead, thousands of volunteers from all over the world trawled through the data in three weeks!

The amazing thing about this project is that it allows the average Joe and Jill to do big science, to connect with big projects and be part of the romance of Science even if they are not professional scientists. It cuts across age, race, culture, gender, profession… it brings people together in their love and wonder of the natural world.
So now the question is: can we design other studies and experiments using this concept of citizen science and solve big problems, not just in astronomy but in all disciplines (projects that take forward the Zooniverse principle)? Imagine the power of harnessing the talent and effort of hundreds of thousands of people who enjoy the discipline (even if they are amateurs). This diversity in thought and experience enriches the work so much, while sparking interest and a common sense of purpose amongst so many people.

Countries like China, India and Brazil (among others) would do especially well to engage the millions of people who could contribute, and maybe this can even help bring down the costs of certain kinds of research!

This much I know: I am itching to create a project like this of my own!

The Browne review of higher education

The Browne report on Higher Education (HE) was published about 3 years ago. Some of its recommendations have been implemented and we are now beginning to feel the changes. So it may seem odd that here I choose not to examine how the recommended changes are faring. Instead I want to look at the basic premise of the review.

Why?

I expect in the future we may see more reviews and more changes. It is vital to get the basis on which we base our decisions right. Also, the recommended £9,000 tuition fee cap is now being deemed too low to be sustainable by some in higher education.
The remit of the review was restricted to examining the teaching provision across institutions of higher education in England and Wales. Research was excluded.

This leads to the first fundamental issue with the review.

An inappropriate business/economic model: In the present era almost all universities carry out research, teaching, policy work, outreach and consultancy, and disseminate knowledge that filters into society and sets the context for discourse in a modern society. In other words, there are multiple tasks and outputs connected to Higher Education Institutions (HEIs). The economic model/logic implicit in the review considers only one output: teaching. All other outputs and business, including research, are ignored.

However, economic studies on principal-agent modelling have shown that where an agent performs several tasks (some of which are less measurable than others), it is inefficient to base performance-based rewards and incentives only on the measurable output. It leads to agents prioritising this output and neglecting the others, which may be equally or even more important. It can also lead to agents skewing metrics to show attainment. This view is supported by a paper on teaching performance (1); see also the footnote*. Such an outcome can harm the very output that the review aims to improve: teaching provision.

The Browne review hoped to use market forces (students choosing the best value-for-money HEI) as a way to stimulate sector-wide improvement in teaching, better access for students, and a lower outlay from the public purse towards HE. However, since teaching is not the only output of HEIs, and the other outputs are not so easily measurable (by way of student preference for a course at an institution), the review is flawed at its very heart in being able to achieve its stated objectives. Indeed, it may well lead to other, less beneficial, changes in the long run: risking the quality of the research output of HEIs, all universities raising their tuition fees to the capped value, courses that have lower market appeal (as perceived by students) disappearing, and overcrowding in some courses.

The second flaw in the review is ignoring the nature of the HE market and using the wrong indicator to measure teaching performance. Several factors determine the reputation and desirability of an HEI as a place to undertake taught study. Paradoxically, research is chief amongst these. Students often use the reputation of an institution as a key metric in choosing where to study (Russell Group universities are often the aspiration for students because in the job market the university brand name carries weight), though this reputation can be more directly linked to research than teaching. If we take student satisfaction as a metric and look at the results of the National Student Survey, we find that for identical courses offered by several universities, in many cases Russell Group universities score lower than smaller and less research-intensive counterparts. Yet, consistently, year on year, the places at the Russell Group universities are filled faster.

The underlying assumption in the report that students choose to study at an HEI for its teaching alone, or primarily, is not correct. Therefore student choice of some HEIs cannot be used as a reliable indicator of the teaching excellence of those HEIs; it is in fact not the right indicator or metric for the economic model the review employs. A deeper understanding of the nature of the relationships at play in the HE market is needed to fashion the necessary measurement instruments.

The third basic issue lies in the approach taken when commissioning the report. Universities and HEIs are similar to businesses in some respects, but are not by their nature a business in the same way as a shareholder-owned enterprise. The stakeholder distribution of the two is not identical, and therefore using the lens of free-market economics to look at HE is not right.

Businesses exist to provide a profit to the shareholders/owners through providing a service to customers. Profit is the chief concern and may on occasion even be at odds with customer concerns. On the other hand, HEIs have several stakeholders: students, research councils, companies, charities and other funding bodies, and the larger society. The primary concern for HEIs is rarely monetary profit. Most HEIs are also charitable institutions. Therefore, it makes little sense to apply a treatment that works for commercial businesses to HEIs when the two are completely distinct species of institution. Some may say this is the foremost argument against the way this review was conducted.

Any future reviews would I hope address the totality of the causes and effects that play a part in shaping higher education.

Footnotes and references
*As a concrete illustration of the distortions that testing can cause, in 1989 a ninth-grade teacher in Greenville, South Carolina was caught having passed answers to questions on the statewide tests of basic skills to students in her geography classes in order to improve her performance rating (Wall Street Journal, November 2, 1989).
1. Hannaway, Jane. 1991. “Higher Order Thinking, Job Design and Incentives: An Analysis and Proposal,” American Educational Research Journal (forthcoming).

To space or not to space

India is launching its first Mars mission (the Mars Orbiter Mission) tomorrow, the 5th of November 2013. What got me writing this blog post (apart from my love of space-related stuff, national pride and the hope of something as ground-breaking as the discovery of water on the Moon that came about in India’s first mission to the Moon, Chandrayaan-1) was the presence of the usual ‘but they don’t have enough toilets/schools/roads/electricity/hospitals… so why do they need a space programme’ sort of comments.

This isn’t new so why am I bothering with a blogpost about this?

I do not pretend to have the credentials to judge the merits of this Mars mission. The mission may be brilliant or it may be ill-conceived. Instead, what I want to address here is the perpetual “if you can’t solve all the problems of your country how can you think of a space mission” brand of criticism.

Not all those who raise these questions are wrong, and many are extremely well-meaning. We need more open debate and discussion, and I present my views here.

My response to the critics is this:

Space and related technologies lead to direct massive socio-economic benefits. Take as examples:

– the weather monitoring satellites launched in the 90s and the first decade of this millennium that are used to monitor and predict weather patterns: monsoons, hurricanes, cyclones and the like. Without these we would suffer more crop damage, poverty and loss of life in natural disasters such as Cyclone Phailin, where fewer than 100 people died (compare this to the 10,000 deaths in the 1991 cyclone that hit eastern India).

– Consider how telecommunication satellites have connected huge swathes of India, including rural populations. This contrasts with an India where getting a landline telephone connection could take months and was the preserve of the middle class and the rich. Today even a rickshaw puller can own a mobile phone. The ability to communicate has implications for social equality as well as economic prosperity.

– A few decades ago there were critics who saw no point in expensive programmes to indigenously develop and launch satellites. Yet today India can make the satellites it needs and launch them, all at a fraction of the average cost in the global satellite market. This has given India a commercial competitive edge in the global satellite market, making the space programme a net earner in the budget. It was possible only through hard scientific graft, and this kind of know-how is not given away free by more advanced nations. You have to earn it!

Poor countries with large populations (usually with illiteracy as a further problem) have economies that depend on millions working jobs at the low-value end of the economic chain. Their share of global trade value is low and the net addition to product value is low. Therefore scientific projects are needed that instigate technological development and cause high-value industries to grow (albeit slowly and starting small), gradually moving the country and its economy towards development. Otherwise these countries risk remaining stagnant as pools of cheap manual labour in the global economy.

One can’t just send a probe to Mars one day on a whim. It takes a huge concerted effort and the development of many other ancillary technologies to achieve such a dream. Each bit of scientific advancement has a ripple effect in improving a vast array of products and services for the people. Many of these would either not happen as quickly or as effectively without such ambitious missions. These missions also need trained and capable technical personnel, which spurs technical education.

Inspiring a people and allowing them to dream. I dreamt of being an astronaut as a young girl in India, but knew I would have to go to the USA or a European country to make that dream come true. Eventually that dream died. The pain has never left me. However, with each such project, the generation of young people in India can dream, and hope to fulfill these dreams in India. This has incredible power: inspiring bright and talented young people to study Science, Engineering and Technology. They may not all eventually sit in a rocket, but they may well be setting up the next Infosys, inventing the next Bose speakers, helping find life in space, solving the energy crisis, finding a cure for cancer… the possibilities are endless.

Progress has to be holistic. For genuine progress, social upliftment and the eradication of poverty and inequality, there has to be development in all spheres. If we ignore advanced science and technology until there is zero poverty, zero illiteracy, zero child malnutrition, etc., we may never get to any target. A broader vision is needed when making such policy decisions than the immediate need alone.

It’s not the prerogative of rich countries alone! The United States sent a man to the moon in the 1960s, yet both social and economic inequity were present in that country then. People have aspirations, and these translate into nations having aspirations as well. While an individual’s aspirations may be small, collectively a country (even a relatively poor one) can have big aspirations. Achieving such aspirations can spur people on to bigger and better things and give them a sense of self-belief. The Olympic motto “Citius, Altius, Fortius”, Latin for “Faster, Higher, Stronger”, applies to more than sport!

A strong reality check! Are India or China devoting 30% of their national budgets to these so-called vanity projects? Or is it in fact a much smaller fraction? For India, the entire space programme accounts for 0.34% of government expenditure. Also, are these (space-faring) missions the only scientific missions espoused by such countries? The answer is no! India has had several extremely ambitious scientific missions that have underpinned its growth, yet these are not mentioned in the same articles by critics, since such projects (the Green Revolution, for instance) seem directly linked to the alleviation of a very visible problem. My argument is that the indirect linkage of other programmes is no less important to the nation’s development.

Space race! Some people are worried that India and China are now locked into a wasteful space race, each trying to outdo the other with ambitious space missions when instead they should look to their poor and needy. Indeed every country should look to its poor and needy, and at the same time make real progress that is sustainable in a future where other countries may have developed more advanced technology. Every nation has to try to meet its future needs and cannot risk being left behind. The rivalry aspect (when it does not get out of hand) can result in positive competition, where the two nations bring out the best in each other. I think if either India or China (or any emerging country) makes some astounding discoveries in space, it is a contribution that can help all of mankind move forward, and be “one small step for man and a giant leap for mankind”!

The NSS: A flawed instrument

The road to hell is paved with good intentions, or so the saying goes. The National Student Survey, or NSS, is certainly an instrument born of the best intentions. Primarily, these are:

– enabling students (and their parents) to compare universities and make an informed choice of both course and institution
– giving feedback to universities, leading to an overall improvement of the student learning experience within institutions and for the higher education sector as a whole

[Cartoon on feedback]

How is this done?
The NSS relies on a national, independent survey of about 22 questions answered mostly by final-year undergraduate students from Higher Education Institutions (HEIs), Further Education Institutions/Colleges (FEIs/FECs) and the like across the UK. A collation of these responses results in the annual NSS results that are eagerly awaited by students, parents, institutions and the media.

Both students and institutions want information on performance relating to student satisfaction: the former to make choices about where to study and the latter to improve performance, strategise and market themselves effectively. On the one hand, the survey will influence the lives of thousands of students as they base their future study decisions on it. On the other, the standing and financial health of educational institutions (and the sector) can be heavily influenced by it, especially in the current scenario after the hike in tuition fees following the Browne report.

It’s fair to say that the NSS plays a vital role in the higher education market. For this reason we can’t afford for it to be less than perfect. Metrics and feedback are an extremely important way to gauge the performance of institutions. Therefore, in principle, the NSS should be an ideal way to meet the needs set out above and listed on the NSS website.

This leads to two questions:
1. Can students use the NSS to reliably compare very different institutions offering similar courses?

2. Does the NSS, as currently implemented, adequately measure student satisfaction?

The first concern is more fundamental than the second, but both require a considerable overhaul of the way that we think about course, programme and university evaluation.

Let’s take them in turn.

Can students use the NSS to reliably compare very different institutions offering similar courses?
Student satisfaction is an important metric of course evaluation, and it should be taken seriously. It is also an important metric for evaluation of the teaching performance of an institution.

However, it’s not at all obvious that very different institutions (for example, a former polytechnic or a small college and a Russell Group university) can be compared on course quality for the same course. In our example, the former may have staff focused on teaching and offer more student-teacher time than the latter, while the latter may have internationally leading research faculty who can impart knowledge at the cutting edge. The former may be based in a remote part of the UK, offering excellent access to the local student population and adding value to the local community, while the latter may attract an international student base, giving a very cosmopolitan experience to students. Both institutions and courses have value, and environmental factors may interact with course delivery in different ways. Can they really be compared with the NSS?

Let’s look at some data from the NSS 2013 results.

For the BSc (Hons) in Economics, the scores (for overall satisfaction with the course) are:

[NSS scores for BSc (Hons) Economics]

For the BSc (Hons) in Physics, the scores (for overall satisfaction with the course) are:

[NSS scores for BSc (Hons) Physics]

This is factual NSS data. Can it help students in comparing the teaching of Economics/Physics at these institutions?

Another critical and basic factor is this: what constitutes “value” may be different, or carry different weight, in different programmes. For instance, a course in BA (Hons) Natural Philosophy may have different aims than a course in BSc (Hons) Forensic Investigation and Biosciences. Again, can the NSS be used to compare them, and can it tell prospective students which course is better value?

We need to understand what these objectives are, before we can determine how best to measure a course’s success in meeting them, or even when best to measure it. It may well be the case that the true value of a course will only emerge when a student in some way puts the course to use at the workplace, or goes on to graduate school. What is the appropriate time-frame for such a question? Is the student in the best position to determine this while the course is going on, or just immediately after?

There are other data available that can go some of the way towards answering these other questions: placement rates of universities, the research profile of an institution and so on. Using student satisfaction to evaluate a course and compare this across institutions is akin to using one temperature reading to determine the climate across a continent.

Near-contemporaneous feedback is great, because it helps course administrators to identify and fix problems in the way that a course is managed and delivered to the next cohort. But this assumes that the survey delivers consistent, reliable results that can be used to inform policy. That may be a strong assumption.

Let’s look at the second question:

Does the NSS, as currently implemented, adequately measure student satisfaction?

The NSS asks a series of standard, multiple-choice questions on aspects of course delivery such as teacher engagement, assignment assessment and the intellectual content of a course. For it to perform as intended, we require (at least) the following to be true:

– The NSS questionnaire should cover all significant aspects of course content. Without a clear policy on what constitutes a “significant aspect” of course content, this is hard to judge. Does the statement “I find this course intellectually stimulating” cover “This course has acquainted me with the research frontier in this topic”, “This course has given me a fresh perspective on this topic” or “This course has shown me the policy implications of this movement/discovery/technology”?

– Without the option for open-ended questions, or for students to add comments, can we build in systematic revisions to the questionnaire process? Most universities undertake internal surveys to gather feedback from students on individual modules and courses, and also have face-to-face feedback mechanisms. In meetings, students raise their concerns and give valuable feedback: discussion allows for properly understanding the issues, engaging with students, coming to agreements and making positive changes. The NSS does not allow for suggestions or this deeper understanding.

If two institutions differ on NSS scores for the same course, is this entirely due to differences in course delivery between the two?

There are several factors (historical, environmental, cultural and economic) that are external to a course and interact in unpredictable ways with the way that a course is delivered. Consider a course on the history of diplomatic relations with the Middle East: the identical course, but in one case taught in a multiethnic environment, and in the other taught in a more homogeneous classroom. Are the two experiences in any way comparable?

If the scores for a course in an institution are different between years, is this entirely due to differences in course delivery?

The results are presented by year. A small student cohort responding in one year, compared with a larger cohort in another, can lead to big statistical deviations year on year. The NSS score can therefore fluctuate significantly just because of the number of respondents, rather than because the underlying teaching quality has changed. This is only partially explained in the fine print and may not be obvious to people using the survey. Though results for each year are available, there could be more meaningful measures, such as averages or trends over several years.
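A quick back-of-the-envelope simulation makes the point. The numbers below are made up and this is not the NSS methodology, just basic sampling noise: if every student independently has, say, an 85% chance of reporting themselves satisfied, a course with 15 respondents will see its headline score swing far more from year to year than one with 150, even though nothing about the teaching has changed.

import numpy as np

rng = np.random.default_rng(0)
p_satisfied = 0.85   # assumed true underlying satisfaction, identical every year
n_years = 1000       # number of simulated "years"

for cohort in (15, 150):
    # headline "% satisfied" reported each year for a cohort of this size
    scores = 100.0 * rng.binomial(cohort, p_satisfied, size=n_years) / cohort
    print(f"cohort of {cohort:3d}: scores typically range "
          f"{np.percentile(scores, 5):.0f}%-{np.percentile(scores, 95):.0f}% "
          f"(std = {scores.std():.1f} points)")

With 15 respondents the score fluctuates by around nine percentage points (one standard deviation) purely from sampling; with 150 it is under three.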

Feedback is essential to make improvements in any system. The NSS can be a hugely important tool for that. In its current form, it may not be leading us as much as misleading us.

Being taught and teaching

What would it be like to sit in the eye of a hurricane? When do we notice disruptive changes that redefine our lives in some way?

I wonder if we only really appreciate the impact quite a while after the change has occurred. It seems to me that teaching and education are in that phase as well.

[Cartoon comparing innovation with dishwashers]

There are huge changes in the technology of teaching, and discussions on the very philosophy of teaching, which will be fully understood perhaps years later. In turn these changes are affecting technology and social behaviour. And here we are, right in the middle of it all, perhaps unaware of the size of the wave we are surfing!

MOOCs, or Massive Open Online Courses, have now been around for a few years and several top universities offer them (see the top few here). The pros and cons of MOOCs have been, and continue to be, debated. With MOOCs, the internet has made it possible for hundreds of thousands of people distributed globally to attend courses offered by one professor. From the student’s point of view this is free education from globally renowned experts at one’s convenience: wow! Interestingly, to offer such a course the instructor needs to consider practical aspects such as how more than 100,000 assignments can be graded. One solution is crowdsourcing the assessment, which becomes important in the logistics of grading and changes the nature of learning as well! How successful MOOCs can be in the long term is anyone’s guess. The more interesting question is: what new things will we learn about learning through MOOCs?

Another aspect of teaching that is now being discussed is the “just in time” approach. “Just in time” is a case of inductive learning and to an extent involves “flipping” the role of contact time in the teaching cycle: students are asked to complete some learning outside the classroom in their own time, and their feedback is used to inform classroom activities/lectures. This could mean quizzes being completed beforehand (encouraging students to engage with the material and think about it), with the feedback on these (what topics students found hardest, for example) being used to determine which topics class time should focus on. Another way to interpret “just in time” is to assign students practical/problem-solving projects and have them learn the concepts and acquire the knowledge essential to complete the project. All this is designed to make learning more effective and interactive.

I want to be part of these developments too. So I have signed up to learn through a MOOC on solar cells and I am going to use “flipping” in my teaching this term. Then maybe I will have a stronger sense of the change that’s happening.