Sunday, 10 May 2015

Two Years Later...

There has been quite a quiet patch on this blog, so I thought I'd give a quick update, even though I suspect my readership has now dwindled away altogether.

I'm no longer a freelancer, but instead a fully employed member of Whisk. The work is still fun and challenging, the colleagues are great, and I now regret that I didn't take the leap earlier. My heart aches when I think of all the years I wasted in academia, thinking conditions would improve.

So, in retrospect it has been the right choice to quit lecturing and to go out and work in the wild world...

Monday, 15 July 2013

Changing Places... and Names

The observant among my two readers might have noticed the name of this blog has changed. While in a sense I'm still learning and doing research, I'm no longer teaching as part of my job. So a name change seemed only appropriate.

As for teaching, that was the one aspect of my previous job that I was going to miss. However, as it turns out, I will already be speaking at two conferences in September: iOSDevUK in Aberystwyth, and BrightonSEO in, erm, Brighton. And unlike academic conferences, the organiser actually pays you to speak (in the form of accommodation, travel, or both).

One big worry when freelancing is that there won't be any work available. However, after six weeks I have pretty much had the opposite experience (though it might be a bit early to draw any conclusions yet). I'm currently working at a Birmingham start-up, Whisk, improving their linguistic analysis. It turns out there is quite a demand for linguistic analysis in the 'wild' world of business. At the moment I'm doing about eight hours a day of research, and no teaching or admin. Compared to the past years at the university, this seems like paradise. And of course, no pressure to produce publications no-one will read.

But as with any fairy-tale paradise, it remains to be seen whether I will get chucked out at some point and have to do a boring contract to pay the bills, or sit at home waiting for a phone call from a recruiter. At the moment, though, I'm optimistic that this will not happen any time soon.

Stay tuned...

Sunday, 10 March 2013

Career Change

Interesting times... after 19 years at my current university, from Research Associate to Computer Officer to Lecturer, I have finally decided to leave academia to work as a contractor/consultant in IT and NLP.

I was thinking of writing a post about my reasons for this rather radical step, but I think it would quickly turn into a rant, and I don't want to dwell too much on negative things. To summarise, I am no longer happy with the way higher education is developing in this country. It is less and less focused on what I believe to be the point of academia, namely broadening people's horizons and preparing them to adapt to anything life throws their way. With the rise in costs, students increasingly just care about assessments, and I can't really blame them. A further aspect is the importance of targets, especially in recruiting PG students, where bums on seats are all that count. And then there is the continuous fragmentation of the job: so many little tasks to deal with at the same time that it is difficult to concentrate on any one thing.

There have also been many positive aspects to it: curious students, friendly colleagues, interesting topics to work on. But I no longer feel that working in academia makes the best use of my skills. It was a tough decision to make, but I don't want to ask myself in 20 years' time "why did you not do something when there still was time?" And, given that I originally came on a one-year contract, I think I managed to stay on for quite a long time.

Anyway, from June 1st this year I will be looking at HE from the outside only.

Wednesday, 4 July 2012

Education Emptiness

I have read too much Roger Schank. And I've been thinking too much. This is something best avoided, especially when marking student essays. Why? Because it makes you question what it is we're doing in Higher Education.

Assessing Essays

Marking essays is a soul-destroying task: you have too little time to spend on each essay, and a large pile of essays to process. Most students spend days and weeks preparing their essays, so it always feels wrong to read and assess them in about half an hour, including writing up the feedback. This is very unsatisfactory, but otherwise one simply cannot turn around the marking in the allocated time.

But the worst thing about marking is its reductionist nature. An essay is a complex piece of writing, comprising style, argument, expression of knowledge, understanding, interpretation, analysis, discussion and so on. And all these different dimensions get conflated into a single point on a one-dimensional scale: a grade between about 40 and 70. This is just not right.

Many essays end up with the same numerical grade assigned to them, but they are not really comparable. One student might write eloquently but superficially, another provides deep insights with terrible grammar. One student might have a great idea, but not much understanding of the underlying concepts. Another has solidly learned everything that was required but lacks the creativity to apply the principles to a given problem. Yet they all get the same numerical value. Different feedback, sure, but that does not really count for anything.

School children get detailed reports (besides a few simple letter grades), but in HE there are simply not the resources to do this, as there are too few staff and too many students, unless you are at Oxbridge. In principle, though, that should not be an insurmountable problem.

You're doing it wrong!

After grading, a mathematical crime is committed: the numerical grades are added up and averaged. This is simply not valid. The numbers are not numbers; they are labels that look like numbers. If we assigned the essays letters, it would be more obvious: what is the average of A and B? But that is a completely different issue to be discussed on another day...
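
Here is a small sketch (in Python, with made-up numbers, not anything an exam board actually uses) of why this matters: the marks are really just ordinal labels, so their 'average' depends entirely on the arbitrary numeric encoding you happen to choose.

    # Two equally defensible numeric encodings of the same letter scale
    # (both invented for illustration, not taken from any real marking scheme).
    equal_steps  = {"A": 70, "B": 60, "C": 50}   # assumes evenly spaced labels
    skewed_steps = {"A": 75, "B": 65, "C": 52}   # assumes unevenly spaced labels

    essays = ["A", "C"]   # the same two essays, assessed under both encodings

    for mapping in (equal_steps, skewed_steps):
        values = [mapping[grade] for grade in essays]
        print(sum(values) / len(values))
    # Prints 60.0, then 63.5: the same two essays get a different 'average',
    # because the arithmetic is only as meaningful as the encoding behind it.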

Essentially, then, after a lot of adding and averaging, the whole three years a student spends at university is reduced to a single label, the degree classification. This is again an enormous reduction: a multitude of information collapsed into one of just four points. And this point decides what possible careers a student can then pursue...

As the degree class is so important (and expensive, especially for the incoming cohorts of students), it tends to be at the forefront of students' minds. This is of course a wild generalisation, and there are many exceptions, but in my experience a lot of students are primarily interested in getting good grades. Learning becomes secondary, merely the means to the end of achieving good grades. That means that curiosity, a central ingredient of successful learning, suffers, or rather is redirected into finding out how to get grades. One cannot really blame students for trying to game the system, which is essentially what they learn to do in the end.

It does work elsewhere...

Postgraduate work is different, though, as it is less regimented. And, more importantly I think, PhD students do not get a grade. It's pass or fail. You either get a PhD, or you don't. There are, of course, differences: you could get through with major corrections, minor corrections, or no corrections. But nobody will know whether you scraped through with a 'revise and re-submit' or sailed through without any required corrections. If it works for PhDs, why not for UGs as well?

The problem is that we've got too many undergraduates, so there needs to be some differentiation. But why? Who wants it? Presumably those who employ graduates, so that they can see who is better or worse. But does the degree class really reflect vocational ability? I would doubt that, but in the end it is just another filter to reduce the 52 applications for each graduate job to a manageable number. With PhDs this is not so much of an issue, as you can get a more rounded picture by looking at their previous grades or even publications.

In-Conclusion

So what is the solution to this dilemma? It basically requires a system change, which is probably not feasible. Employers want differentiation, universities want to climb up the league tables (which nowadays tend to include employability metrics), and students want to have something that distinguishes them from the crowd. But in the process, education suffers. Learning is not really the focus of HE, and we're just churning out graduates who are good at spotting what is needed to get a good grade and doing just that.

We're assessing far too much, and it destroys what I think universities are all about: expanding your horizons, applying your knowledge and curiosity to interesting problems, being able to fail at tasks without jeopardising your future career, and generally maturing and learning stuff.

Thursday, 12 January 2012

On-line Education?

Last term I completed the on-line module in Artificial Intelligence offered by Stanford's Sebastian Thrun and Google's Peter Norvig, both top academics and experts in their field. I guess it was successful, as I received a grade of 79% (a 'first' in UK terms, though I suspect it doesn't work like that). Given the minimal effort I put in (mainly due to lack of time), I could very likely have achieved a better result with some extra work. But with a full-time job it's not so easy to put aside the 10 hours a week recommended by the course leaders.

So I got 79%, but did I learn anything? Does the 79% reflect my achievement? And what was the overall learning experience like?

First, the learning experience: the module was delivered as a series of short low-tech video lectures, interspersed with multiple-choice or number-entry quizzes. Then there was homework (multiple-choice and number-entry quizzes) and a mid-term and final exam (both multiple choice and... you get the idea).

The lectures were interesting: a top-down camera view of a piece of paper on which the lecturers would write by hand, not just a filmed 'lecture'. The tone was informal and friendly, and Thrun's charming German accent made me almost feel at home. I also learned, from the few head-shot video sequences, that Peter Norvig likes colourful shirts.

The quizzes, however, were rather limited. There was the problem of turning quite complex material into a simple format, and also (which I found hardest) missing context. As a result the questions were often about trivial side-aspects, or impossible to answer due to ambiguity (judging from the few forum posts I looked at, many other people had the same issue). You can interpret a question in many different ways, especially if you need to take into account external constraints which have not been clearly specified.

Quite often you get an answer wrong, and then look at the explanation of the proper outcome, and you think "oh right, that's how they meant it".

I struggled quite a bit with Bayes networks, and I consistently got the wrong answers when asked how many independent parameters I would need to describe one. To this day I do not know why I need to know this. I can guess, but it wasn't really explained. Formal logic was one of the things I felt very comfortable with, as I had covered it in my own UG studies as a computational linguist, but I only got 1 out of 4 points in the final exam question: I made one small error early on, and the subsequent answers built on it, so they were also wrong.
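
For the curious, here is my reconstruction of the counting the quiz was after (the course may well have framed it differently): each node's conditional probability table contributes (number of states - 1) multiplied by the number of its parents' joint configurations. A sketch in Python, using a made-up toy network:

    # My reconstruction of the parameter count, not the course's own wording:
    # each node needs (states - 1) * (product of its parents' state counts)
    # independent parameters for its conditional probability table.
    from math import prod

    # Made-up toy network: Rain -> WetGrass <- Sprinkler, all variables binary.
    # node -> (number of states, list of parents)
    network = {
        "Rain":      (2, []),
        "Sprinkler": (2, []),
        "WetGrass":  (2, ["Rain", "Sprinkler"]),
    }

    def independent_parameters(net):
        total = 0
        for states, parents in net.values():
            parent_configs = prod(net[p][0] for p in parents)  # 1 if no parents
            total += (states - 1) * parent_configs
        return total

    print(independent_parameters(network))  # 1 + 1 + 4 = 6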

My best results were in computer vision: 100%. And that's even though I'm short-sighted! But do I really understand computer vision so much better than all the other areas of AI? No. The thing is, all that was asked in the relevant quizzes was basic maths. There was a simple formula relating various parameters such as focal length and distances to each other, and all you had to do was rearrange the equation for different values and work out the result. I would have been able to do this beforehand, and didn't even learn it in the course. Still, I was assessed on it and scored 100%. But anybody with basic maths would have been able to do that, even without watching a single minute of any of the class videos.
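
From memory it was something along the lines of the pinhole projection relation, although I should stress that this is my guess at the kind of formula involved, not necessarily the exact one used in the course. The quiz questions then amounted to plugging numbers into it:

    # Pinhole projection (my guess at the kind of formula, not the course's
    # exact one): an object of height X at distance Z appears with height
    # x = f * X / Z on the image plane, for focal length f.

    def projected_height(f_mm, object_height_m, distance_m):
        """Image-plane height (in mm) of an object seen through a pinhole camera."""
        return f_mm * object_height_m / distance_m

    # 'Quiz': a 1.8 m tall person, 10 m away, 35 mm focal length...
    print(projected_height(35, 1.8, 10))   # ...projects to 6.3 mm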

So my first criticism is: the quizzes were not designed properly. There is a lot more one can do with multiple-choice questions, but Thrun and Norvig didn't do it. The assessments felt like an ad-hoc addition, along the lines of "I need a quiz now, so what could I ask?".

My second criticism is the way the scoring worked. One slight mistake, nil points. In a real exam you would still get partial credit for results that are wrong only because of a mistake in an intermediate step. An all-or-nothing approach is not very helpful.

Is this the future of education? Are on-line classes like this all we need? I don't think so. Apart from the implementation (it'd be easy to come up with some better quizzes), it's also quite detached. There is little direct interaction (impossible with 140,000+ students), and at times you feel a bit lost. It is obvious that this was an experiment, and as such one cannot expect wonderful and perfect results, but there is still a long way to go.

Did I learn anything I would not have learned from reading a book? Probably not. The main advantage for me was the pressure of getting through the weekly session before the hand-in date, which makes you put aside time you would otherwise spend on something else. So in that respect it was alright; and the fact that it was delivered on-line was convenient, as you could choose when to study. But while this is good for a supplementary course, I am glad I had proper seminars and lectures when I went to university.

While you can't argue with a free course (you did get more than you paid for!), there is still a lot of scope for improvement in this particular type of course, the on-line distance course, and I cannot see it replacing 'proper' seminars any time soon. But it was overall an interesting experience, if only to find out what 'real' teaching should be like.

Thursday, 8 December 2011

Pointless Quizzes

This morning I completed the University's on-line Diversity Training. In principle a good idea, as it raises awareness of ways you might be disadvantaging students (or members of staff) without realising it, but in practice just another thing to do during an already full schedule. And much of it was not relevant for me anyway, as I am not in a position to determine the level of pay of my fellow members of staff, male or female.

What struck me, however, was a particularly bad quiz. I've been thinking about this in the context of the Stanford AI-Class (on which I will post soon - it'll be finished in two weeks' time), and here it came up again: there were two short on-line multiple-choice quizzes embedded in the course.

The first one was so simple that I could just guess the right answers without having had to read the preceding text. If the answers are so glaringly obvious that anybody can get them right with a bit of common sense, then the quiz does not really reflect well on the course as a whole; it just comes across as (literally!) a box-ticking exercise.

But the second quiz was even worse, and not only because I got some of the answers wrong. Various scenarios were given, and the four choices you had to pick from were: was this case a) victimisation, b) direct discrimination, c) indirect discrimination, or d) nothing illegal?

Why on Earth do I need to know the difference between 'direct' and 'indirect' discrimination? As far as I am concerned, I need to know what is legal and what is not; in other words, there are only two relevant categories for me: 'discrimination' or 'no discrimination'. So I got several answers wrong because I chose 'direct' when the answer was clearly 'indirect', or the other way round. This was just plain annoying. I can see that I would need this if I wanted to work in the legal field of employment tribunals, but as a simple bod delivering seminars and lectures about language to students, I couldn't care less, as long as I know that I'm not doing anything that would count as illegal discrimination.

But this seems to be a frequent pattern with many on-line tests. Because the people creating such tests have not thought about them properly, they ask about spurious details just so that they can have four possible choices, when ideally you should start from the learning outcomes: what do the people taking the quiz need to have learned, and how can we test this?

It is perfectly possible to create really good and useful on-line multiple-choice quizzes, but it requires work and thought about their purpose. Otherwise it just annoys people and makes them want to do things to you that I could not possibly mention on this blog.

More on this topic in a few weeks when I will be discussing my experiences with the on-line AI course...

Sunday, 25 September 2011

Why I deleted my Facebook account

I have just deleted my Facebook account. I cannot remember exactly when I joined, but it was probably 5 or 6 years ago. A while ago I had already removed most information about me (such as what music or books I liked), as I felt increasingly uncomfortable with FB's way of making more and more of your information available to other people unless you explicitly disallowed it. This did not feel very honest to me.

And then today came the proverbial straw: I read two (unrelated) posts about FB in direct succession which convinced me that it was finally time to cut the cord. The first [1] showed how FB does not really 'log you out' when you log out: it keeps certain cookies in place which can identify you. I don't use many public computers (especially not with FB), so this does not overly concern me, but I see it as yet another violation of the privacy you would expect by default.

The second article was vaguely similar, showing how FB can track where you have been, and how other sites can post on your 'wall' when you simply read a webpage. This is just silly. I'm, again, not overly concerned about this (along the lines that I don't generally do things which are illegal or immoral), but on top of that it just contributes to the already existing information overload. If I need to care that person X read webpage Y, then I would expect X to tell me. I don't want a stream of activities swamped with reports of which websites people I know have visited.

Anyway, those two articles were enough to make me permanently delete my account. I'm not sure what 'permanently' means in this context. For at least the next 14 days FB keeps my account in case I change my mind, and I don't exactly trust it to delete anything for real anyway. Remember, in the FB business model the assets are you, the users, and your data, which FB mines and sells on to other people.

Will I miss FB? I didn't really use it that much in the first place. I'm much more active on Twitter, which is somewhat less intrusive and has fewer opportunities to do stuff with my data. It won't now be as easy to keep up with what some family members who live abroad are doing, but there are other ways of keeping in touch. The main issue is our postgraduate students (I coordinate our English Language PG students): they have recently set up a FB page, which I now won't be able to see. But most things are still posted on a traditional mailing-list anyway.

On the positive side, I no longer need to deliberate whether to accept somebody who wants to be my friend or snub them if I only vaguely know them. Fewer decisions to make equals more happiness.

It feels weird, cutting the cord, as it has with any account on any system I've spent a reasonable amount of time on, but in the long run I don't think I will shed any tears over it. I'm just concerned that FB will spread its tentacles out further, so that at some point in the future everybody is expected to have a FB account, and you cannot do certain things without one. Matrix, anyone?

[1] Apologies for the shortened (and thus opaque) links - they're directly copied from the corresponding tweets