Smart Students, Smart Shirts and Smart Outcomes

I feel so happy writing this blogpost today.

Rodrigo, a student who did his final-year dissertation under my supervision, designed a smart shirt to detect temperature, ECG and sweat. More on his project can be seen in this file: rodrigo-brochure.

Rodrigo designed and implemented a smart wearable shirt that monitored real-time heart rate and upper thoracic temperature, and detected sweat secretion as an indication of dehydration.

He not only got a 1st in his project; the project also won 3rd prize in the Made at City competition. Rodrigo won a medal and a cash prize!

The work he did was amazing: he applied so much of what he had learned in his Biomedical Engineering course, and of the research reading he did, to something practical.

Every year we supervise many students and some of them get good grades. But it is rare to get a student with such passion, who really applies the knowledge gained to create something useful in such an innovative manner.

I enjoyed working with him and I hope his story can encourage other students to do work they find exciting and enjoyable.

Certainly as his supervisor I enjoyed it!

 

Fun, Games and Teaching

Recently, in my first-year Physics class, I posed a problem to my students:

What is the most efficient way of determining if a given point lies inside a given polygon?

At the end of this post, I give the code my students sent me. And a game one of them created!!!

This is a real and living research problem. For me the context is this: in the Finite Element meshes I construct, we sometimes want to move one of the nodes in a mesh element so that the photonic structure/boundary is better represented. The trouble is determining which node to move out of all the nodes in the element. Since you may have to do this for a large mesh with close to 100,000 elements, the test must be fast, accurate and reliable.

So the students sent me this:

1. Python code (by Riad Ibadulla):

n = int(input("Enter the size of polygon: "))
# input ----------------------------------
x = []
y = []
j = 0
print("Enter the coordinates of polygon: ")
for i in range(n):
    x.append(int(input("x=")))
    y.append(int(input("y=")))
print("Enter the coordinates of the point: ")
x0 = int(input("x="))  # int() added: input() returns a string in Python 3
y0 = int(input("y="))
# checking --------------------------------
for i in range(n):
    # cross product of the edge from vertex i to the next vertex with the
    # vector from vertex i to the point; x[i - (n - 1)] wraps round to the
    # next vertex via Python's negative indexing
    z = (x0 - x[i]) * (y[i - (n - 1)] - y[i]) - (y0 - y[i]) * (x[i - (n - 1)] - x[i])
    if z > 0:
        j += 1
# for a convex polygon with vertices entered anticlockwise, the point is
# inside only if every cross product is positive
if j == n:
    print("belongs to polygon")
else:
    print("out of the polygon")

2. A game, Galactic Waste Man (by Kenneth Evbuomwan), that uses logic central to the problem posed. The code is available here: GameController.

So can your algorithm match/beat what my first years have done?
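For readers who want a baseline to beat, here is a minimal sketch (my own, not the students') of the classic ray-casting test: cast a horizontal ray from the point and count how many polygon edges it crosses; the point is inside if the count is odd. Unlike the sign test above, this also handles concave polygons. The function name and interface are my own choices:

def point_in_polygon(px, py, poly):
    """Ray casting: works for any simple polygon, convex or concave.

    poly is a list of (x, y) vertex tuples in order around the boundary.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# quick check on the unit square
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(1.5, 0.5, square))  # False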

 

A different kind of pride

Usually my blogposts here are about topics that interest me (on Science, policy, equality and diversity etc.) and my own research.

Today I feel especially proud writing about my PhD student, Swetha. Swetha entered the EPSRC's UK ICT Pioneers competition in the information overload category. She made a short film about her research work, which took her to the finals of the competition: she was among only 15 finalists chosen, just 4 of them (including her) in her category. There she presented her work on a poster and explained it to a rather tough panel of judges drawn from the EPSRC and companies like Hewlett-Packard, Samsung and Facebook.

As a student only in her first year of PhD, she was the youngest (in research age and stage of PhD). So this was a huge achievement. It has also spurred her on to believe in her own merit and that she can do great Science.

It was a lovely experience for me as a supervisor to attend the awards night and see my student feted and acknowledged.

Academia is a strange life: for a very long time we work hard to reach the lowest rung of the ladder (10+ years of study to finish a postdoc and get a first faculty position, in many cases). We are regarded as junior for so long that it is often not till much later, when we supervise PhD students, that the realization that our own student days have ended dawns upon us.

Having one’s own students do well is the best possible way to realize I am no longer a student!! So my thanks to Swetha.

 

Teamwork and jigsaws

I was looking at the work of Daniel Stokols, an environmental psychologist, to see if I could find any work on how buildings and their design impact the reactions of students.

What I did find was the Jigsaw Classroom, a cooperative learning technique. The technique was developed to help teach in racially diverse classrooms and to prevent situations like the Columbine massacre from developing.

The idea is to get students to work in teams so effectively that each student acts as the proverbial vital piece of the jigsaw that completes the puzzle. In this technique all members of the team learn to respect the importance and contribution of every other team member, learn to work with them, and overcome distrust and fear. Each person realises that no one can be a freeloader or minimise the work of others.

It should be possible to adapt this method to tackle any kind of divisiveness based on difference of gender, sexual orientation, race, religion, age and even disability.

I find it amazing that there are such innovative techniques to help deliver better teaching!

In a bind

Student days… my glasses are misting in memory of those fun-filled days: laughter, friends, evenings out, and suchlike.

Unremarkably, this kind of memory is selective. In these sepia-printed snapshots of my youth, there is no sign of the tensions associated with exams, report submissions and incomplete projects!

Well, as it is, I don't have to rely on memory for that!

I am attending the MA in Academic Practice here at City University London and, along with the fabulous learning (on how to teach), come the assignments, group work, assessments and the like. So if this post is a little rambling, forgive me, for my brain is scrambled! Right now I am well behind on two chunks of my module on Technology Enhanced Academic Practice (TEAP):

– social bookmarking: we need to use a tool like Diigo to bookmark sites and write about some of these
– the final project (also the biggest chunk of the module!)

This module has been all about using technology in a blended learning environment: mixing traditional teaching methods and approaches with technology. All of it felt new (even the traditional teaching and learning models, let alone the new stuff).
I learned about things like:
• Salmon's five-stage model
• Community of Practice/Community of Inquiry
• Reflective practice
• Constructive alignment
• Social constructivism
• Behaviourism
• Problem-based learning
• Connectivism
• Situated learning
• Humanism
• Laurillard's Conversational Framework
• Cognitivism

So the tech stuff (which is increasingly important given the digital nature of our lives) in the course was on:
– how to use online communities in teaching and learning. It is possible to use wikis, blogs etc. to get students to interact with one another, as well as with the instructor, to learn, sometimes remotely and even asynchronously
– reflection: getting students to reflect on their work and learning to improve their learning outcomes
– tools like Diigo for social bookmarking, blogs for interaction

My project is on formative assessment using tools like Moodle (an online platform used by many universities for education). The idea of formative assessment is that students learn as they test themselves, but they learn better if they can get feedback very quickly (even in real time) and are able to judge where they are weak, or what mistakes they tend to make.
So I overreached and said that I would make a series of short numerical Physics questions, which can be delivered via Moodle and set up in such a way that students get feedback as they attempt each question.
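As an illustration of the sort of thing I have in mind, here is a minimal sketch in Python that writes numerical questions to a file in Moodle's GIFT import format. The question names, physics values and file name are made-up examples, and the exact GIFT syntax for tolerances should be checked against the Moodle documentation:

# sketch: generate a Moodle-importable question file in GIFT format
questions = [
    # (name, question text, answer, tolerance) -- example values only
    ("Free fall 1",
     "A ball is dropped from rest from a height of 20 m. "
     "Taking g = 9.8 m/s^2, how long (in s) does it take to land?",
     2.02, 0.05),
    ("Ohm 1",
     "A 12 V battery drives a current through a 4 ohm resistor. "
     "What is the current (in A)?",
     3.0, 0.01),
]

with open("formative_quiz.gift", "w") as f:
    for name, text, answer, tol in questions:
        # {#answer:tolerance} marks a numerical answer in GIFT
        f.write(f"::{name}::{text} {{#{answer}:{tol}}}\n\n")

The resulting file can then be imported into a Moodle question bank and used in a quiz with immediate feedback switched on.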

Not only that! I then thought this was too easy, so I said I would also make a series of short videos explaining how people could use Moodle to set up such tests, since many colleagues find it difficult to do this. What was I thinking?!!!!

And now here I am, using this blog and online community to moan about the hardships of my student life. I have enjoyed the course tremendously and it has sparked so many new ideas in my head about teaching. But it has also reminded me how my students feel.

So if any of you is in danger of forgetting the pressures of student life, enrol on a course now!

Technical education

I recently enrolled in the University's MA in Academic Practice programme to learn about, well, learning and teaching. I am keen to enhance and improve my teaching using the latest technology available, as well as the insights that learning theory can give.

So I entered a world where humanism, metacognitive theory [1] and many other approaches form the context within which learning is looked at. How do these frameworks impact teaching design, curriculum, learning, assessment and so much more?

The impact of technology in terms of the “blended learning” environment, “chunking” and how our information-processing capability is limited [2], reflection, wikis… all of this has now entered my life.

Perhaps it is this interest that has led to my noticing more articles about teaching, or maybe more people are writing about it in STEM; I do not know.

In a recent issue of Physics World devoted to education, many articles addressed making Physics more interesting to young students (ages 5-16) and designing curricula with this in mind, as well as educating or informing general audiences about Physics using YouTube.

In other posts I have mentioned Just in Time teaching and the top-down approach to teaching that some people advocate: talk about a big problem that interests students (how do you recognise a picture?) and then break it down level by level to get them to the concepts they have to learn in the class/module. This keeps more students engaged than the regular bottom-up approach.

I am yet to find an article/theory/approach about effectively teaching mathematical content (within Physics, for example) at university level. My class has a large spread in terms of both interest and ability in Mathematics: some students enjoy it and are adept, while others find maths difficult and scary. The challenge is how to engage and develop the ability of the weaker students. In my view (not everyone may agree) it is difficult to go to a higher level in Physics without being able to do Mathematics.

If you have any ideas on how to solve this puzzle get in touch!

References:
1. Anderson, T. and Elloumi, F. (eds.) (2004). Theory and Practice of Online Learning.
2. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

The Browne review of higher education

The Browne report on Higher Education (HE) was published about 3 years ago. Some of its recommendations have been implemented and we are now beginning to feel the changes. So it may seem odd that here I choose not to examine how the recommended changes are faring. Instead I want to look at the basic premise of the review.

Why?

I expect that in the future we may see more reviews and more changes. It is vital that the basis on which we make our decisions is sound. Also, the recommended £9000 tuition fee cap is now deemed by some in higher education to be too low to be sustainable.
The remit of the review was restricted to examining teaching provision across institutions of higher education in England and Wales. Research was excluded.

This leads to the first fundamental issue with the review.

An inappropriate business/economic model: In the present era almost all universities carry out research, teaching, policy work, outreach and consultancy, disseminating knowledge that filters into society and sets the context for discourse in a modern society. In other words, there are multiple tasks and outputs connected to Higher Education Institutes (HEIs). The economic model/logic implicit in the review considers only one output: teaching. All other outputs and business, including research, are ignored.

However, economic studies on principal-agent modelling have shown that, where an agent performs several tasks (some of which are less measurable than others), it is inefficient to base performance-related rewards and incentives only on the measurable output. It leads to agents prioritising that output and neglecting the others, which may be equally or even more important. It can also lead to agents skewing metrics to show attainment. This view is supported by a paper on teaching performance (1); see also the footnote*. Such outcomes can harm the very output that the review aims to improve: teaching provision.

The Browne review hoped to use market forces (students choosing the best value-for-money HEI) as a way to stimulate sector-wide improvement in teaching, better access for students, and a lower outlay from the public purse towards HE. However, since teaching is not the only output of HEIs, and the other outputs are not so easily measurable (by way of student preference for a course at an institution), the review is flawed at its very heart and unlikely to achieve its stated objectives. Indeed, it may well lead to other, less beneficial changes in the long run: risking the quality of the research output of HEIs, all universities raising their tuition fees to the capped value, courses with lower market appeal (as perceived by students) disappearing, and overcrowding in some courses.

The second flaw in the review is ignoring the nature of the HE market and using the wrong indicator to measure teaching performance. Several factors determine the reputation and desirability of an HEI as a place to undertake taught study. Paradoxically, research is chief amongst these. Students often use the reputation of an institution as a key metric in choosing where to study (Russell Group universities are often the aspiration for students because the university brand name carries weight in the job market), though this reputation can be more directly linked to research than teaching. If we consider student satisfaction as a metric and look at the results of the National Student Survey, we find that, for identical courses offered by several universities, Russell Group universities in many cases score lower than smaller, less research-intensive counterparts. Yet, consistently, year on year, the places at Russell Group universities fill faster.

The underlying assumption in the report, that students choose to study at an HEI solely or primarily for its teaching, is not correct. Therefore student choice of HEI cannot be used as a reliable indicator of teaching excellence; it is in fact not the right indicator or metric for the economic model the review employs. A deeper understanding of the nature of the relationships at play in the HE market is needed to craft the necessary measurement instruments.

The third basic issue lies in the approach taken when commissioning the report. Universities and HEIs are similar to businesses in some respects, but are not by their nature businesses in the same way as a shareholder-owned enterprise. The stakeholder distributions of the two are not identical, and therefore using the lens of free-market economics to look at HE is not right.

Businesses exist to provide a profit to the shareholders/owners by providing a service to customers. Profit is the chief concern and may on occasion even be at odds with customer concerns. HEIs, on the other hand, have several stakeholders: students, research councils, companies, charities and other funding bodies, and the larger society. The primary concern for HEIs is rarely monetary profit, and most HEIs are also charitable institutions. Therefore it makes little sense to apply a treatment that works for commercial businesses to HEIs when the two are completely distinct species of institution. Some may say this is the foremost argument against the way this review was conducted.

Any future reviews would, I hope, address the totality of the causes and effects that play a part in shaping higher education.

Footnotes and references
*As a concrete illustration of the distortions that testing can cause: in 1989 a ninth-grade teacher in Greenville, South Carolina was caught having passed answers to questions on the statewide tests of basic skills to students in her geography classes, in order to improve her performance rating (Wall Street Journal, November 2, 1989).
1. Hannaway, J. (1991). Higher Order Thinking, Job Design and Incentives: An Analysis and Proposal. American Educational Research Journal (forthcoming).

The NSS: A flawed instrument

The road to hell is paved with good intentions, or so the saying goes. The National Student Survey, or NSS, is certainly an instrument born of the best intentions. Primarily these are:

– enabling students (and their parents) to compare universities and make an informed choice of both course and institution
– giving feedback to universities, leading to an overall improvement of the student learning experience within institutions and across the higher education sector as a whole.

How is this done?
The NSS relies on a national independent survey of about 22 questions, answered mostly by final-year undergraduate students from Higher Education Institutions (HEIs), Further Education Institutions/Colleges (FEIs/FECs) and the like across the UK. A collation of these responses results in the annual NSS results, eagerly awaited by students, parents, institutions and the media.

Both students and institutions want information on performance relating to student satisfaction: the former to make choices about where to study, and the latter to improve performance, strategise and market themselves effectively. On the one hand, the survey will influence the lives of thousands of students as they base their future study decisions on it. On the other, the standing and financial health of educational institutions (and the sector) can be heavily influenced by it, especially in the current scenario after the hike in tuition fees following the Browne report.

It’s fair to say that the NSS plays a vital role in the higher education market. For this reason we can’t afford for it to be less than perfect. Metrics and feedback are an extremely important way to gauge the performance of institutions. Therefore, in principle, the NSS should be an ideal way to meet the needs set out above and listed on the NSS website.

This leads to two questions:
1. Can students use the NSS to reliably compare very different institutions offering similar courses?

2. Does the NSS, as currently implemented, adequately measure student satisfaction?

The first concern is more fundamental than the second, but both require a considerable overhaul of the way that we think about course, programme and university evaluation.

Let’s take them in turn.

Can students use the NSS to reliably compare very different institutions offering similar courses?
Student satisfaction is an important metric of course evaluation, and it should be taken seriously. It is also an important metric for evaluation of the teaching performance of an institution.

However, it's not at all obvious that very different institutions (for example, a former polytechnic or small college and a Russell Group university) can be compared on quality for the same course. In our example, the former may have staff focused on teaching, and offer more student-teacher contact time than the latter, while the latter may have internationally leading research faculty who can impart knowledge at the cutting edge. The former may be based in a remote part of the UK, offering excellent access to the local student population and adding value to the local community, while the latter may attract an international student base, giving students a very cosmopolitan experience. Both institutions and courses have value, and environmental factors may interact with course delivery in different ways. Can they really be compared with the NSS?

Let’s look at some data from the NSS 2013 results.

For the BSc (Hons) in Economics, the scores (for overall satisfaction with the course) are:

NSS scores for BSc (Hons) Economics

For the BSc (Hons) in Physics, the scores (for overall satisfaction with the course) are:
NSS scores for BSc (Hons) Physics

This is factual NSS data. Can it help students compare the teaching of Economics/Physics at these institutions?

Another critical and basic factor is this: what constitutes “value” may be different, or carry different weight, in different programmes: for instance, a BA (Hons) in Natural Philosophy may have different aims from a BSc (Hons) in Forensic Investigation and Biosciences. Again, can the NSS be used to compare them, and can it tell prospective students which course is better value?

We need to understand what these objectives are before we can determine how best to measure a course's success in meeting them, or even when best to measure it. It may well be the case that the true value of a course only emerges when a student puts the course to use in the workplace, or goes on to graduate school. What is the appropriate time-frame for such a question? Is the student in the best position to determine this while the course is going on, or immediately after?

There are other data available that can go some of the way towards answering these questions: placement rates of universities, the research profile of an institution and so on. Using student satisfaction to evaluate a course and compare it across institutions is akin to using one temperature reading to determine the climate across a continent.

Near-contemporaneous feedback is great, because it helps course administrators to identify and fix problems in the way that a course is managed and delivered to the next cohort. But this assumes that the survey delivers consistent, reliable results that can be used to inform policy. That may be a strong assumption.

Let’s look at the second question:

Does the NSS, as currently implemented, adequately measure student satisfaction?

The NSS asks a series of standard, multiple-choice questions on aspects of course delivery such as teacher engagement, assignment assessment and the intellectual content of a course. For it to perform as intended, we require (at least) the following to be true:
– The NSS questionnaire should cover all significant aspects of course content. Without a clear policy on what constitutes a “significant aspect” of course content, this is hard to judge. Does the statement “I find this course intellectually stimulating” cover “This course has acquainted me with the research frontier in this topic”, or “This course has given me a fresh perspective on this topic”, or “This course has shown me the policy implications of this movement/discovery/technology”?
– Without the option for open-ended questions, or for students to add comments, can we build systematic revisions into the questionnaire process?
– Most universities undertake internal surveys for feedback from students on individual modules and courses, and also have face-to-face feedback mechanisms. In meetings students raise their concerns and give valuable feedback: discussion allows for properly understanding the issues, engaging with students, coming to agreements and making positive changes. The NSS does not allow for suggestions or this deeper understanding.

If two institutions differ on NSS scores for the same course, is this entirely due to differences in course delivery between the two?

There are several factors (historical, environmental, cultural and economic) that are external to a course and interact in unpredictable ways with the way that a course is delivered. Consider a course on the history of diplomatic relations with the Middle East: the identical course, but in one case taught in a multiethnic environment, and in the other taught in a more homogeneous classroom. Are the two experiences in any way comparable?

If the scores for a course in an institution are different between years, is this entirely due to differences in course delivery?

The results are presented by year. A small student cohort responding in one year, compared with a larger cohort in another, can therefore lead to big statistical deviations year on year: the NSS score can fluctuate significantly simply because of the number of respondents, rather than because the underlying teaching quality has changed. This is only partially explained in the fine print and may not be obvious to people using the survey. Though results for each year are available, a more meaningful measure, such as an average trend over several years, could be reported.
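Here is a minimal sketch (my own illustration, not NSS methodology) of how cohort size alone drives this: simulate repeated “years” in which every student is satisfied with the same fixed probability, and compare the spread of scores for a small and a large cohort. The cohort sizes and the 85% satisfaction rate are made-up numbers:

import random

random.seed(1)
true_satisfaction = 0.85  # assumed underlying probability a student is satisfied

def simulated_score(cohort_size):
    """Percentage of satisfied students in one simulated cohort."""
    satisfied = sum(random.random() < true_satisfaction
                    for _ in range(cohort_size))
    return 100 * satisfied / cohort_size

for size in (15, 150):
    scores = [simulated_score(size) for _ in range(5)]
    print(f"cohort of {size:3d}:", [f"{s:.0f}%" for s in scores])

The cohort of 15 typically swings by ten points or more between simulated years, while the cohort of 150 stays close to 85%, even though the underlying “teaching quality” never changes.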

Feedback is essential to make improvements in any system. The NSS can be a hugely important tool for that. In its current form, it may not be leading us as much as misleading us.

Being taught and teaching

What would it be like to sit in the eye of a hurricane? When do we notice disruptive changes that redefine our lives in some way?

I wonder if we only really appreciate the impact quite a while after the change has occurred. It seems to me that teaching and education are in that phase as well.

There are huge changes in the technology of teaching, and discussions on the very philosophy of teaching, that will be fully understood perhaps years later. In turn these changes are affecting technology and social behaviour. And here we are, right in the middle of it all, perhaps unaware of the size of the wave we are surfing!

MOOCs, or Massive Open Online Courses, have now been around for a few years and several top universities offer them (see the top few here). The pros and cons of MOOCs have been, and continue to be, debated. With MOOCs the internet has made it possible for hundreds of thousands of people distributed globally to attend courses offered by one professor. From the student's point of view this is free education from globally renowned experts at one's convenience: wow! Interestingly, to offer such a course the instructor needs to consider practical aspects such as how more than 100,000 assignments can be graded. One solution is to crowdsource the assessment, which solves the logistics and changes the nature of learning as well! How successful MOOCs can be in the long term is anyone's guess. The more interesting question is: what new things will we learn about learning through MOOCs?

Another aspect of teaching now being discussed is the “Just in time” approach. “Just in time” teaching is a case of inductive learning, and to an extent involves “flipping” the role of contact time in the teaching cycle: students are asked to complete some learning outside the classroom in their own time, and their feedback is used to inform classroom activities/lectures. This could mean quizzes being completed beforehand (encouraging students to engage with the material and think about it), with the feedback on these (what topics students found hardest, for example) being used to determine what topics class time is focused on. Another way to interpret “just in time” is to assign students practical/problem-solving projects, through which they learn the concepts and acquire the knowledge essential to complete the project. All this is designed to make learning more effective and interactive.

I want to be part of these developments too. So I have signed up to learn through a MOOC on solar cells and I am going to use “flipping” in my teaching this term. Then maybe I will have a stronger sense of the change that’s happening.

University: the wonder years?

The clearing process is just getting under way in the UK. Clearing is a process by which students who don't already have the offer of a university place apply to various universities for available places. Picture call centres staffed by academics, answering frantic calls from students from 7.30am on the day A-level results are declared. Universities try to fill their vacancies, vying with each other for students with good grades; students with sufficiently high grades shop around for the best offer, and those with low grades scrounge for a place, any place. It is bedlam!

Every country has its own version of the university admissions process. Each probably has its own attendant madness.
The range of issues facing students is quite bewildering:
– High tuition fees – is it really worth getting a degree?
– Study in a small, sleepy university town or go off to a big city? Go to another country?
– Contending with a global market- which degree and university will guarantee a job/placement/further study in such intense competition?
– Does a vocational course/apprenticeship hold better career prospects?
– Logistics: housing, loans, social life…

It can't be easy to try and figure out answers to all those questions. Further study is not any easier, with some variation of these very issues weighing on a person, in addition to others. All in all, the start of higher/further education is more and more challenging.

Is it worth all the bother?

I guess every individual needs to work out the answer for themselves. Do students have access to sufficient information to make well-thought-out decisions about these issues? Do they have adequate guidance and support from parents and schools? Some things, like how a particular industry (and hence graduates in a linked degree) will fare three years down the line, are pure extrapolation: how does anyone make a good call on that?

To me, university was about pursuing a subject I loved. I didn't have the maturity or the experience to know all the options it could create for me, just a nebulous idea. Now, many years later, I realise that (for my particular profession of academia) the technical knowledge gained was important, and so were the skills I developed: analytical thinking, the ability to concentrate for long periods of time, writing, the ability to work with others, and engaging with and making friends. The list is longer, but some things are intangible and we don't even realise how we learned them from our peers. To me the uni years were truly the wonder years.

What is/was it like for you?