Thursday 30 August 2018
It's now five years since I left academia, which is weird, as the time has gone by extremely quickly. Somehow I still feel part-academic: not only do I approach many issues the way I would have done when working as a researcher, but I am also still in touch with many academics on social media, mostly from the time I first started out on Twitter, and with colleagues (often from other universities) working in the same field as me.
One thing that is always at the forefront of my mind is that I have no regrets. Life outside the ivory tower is maybe a bit less certain, but it feels a lot more sensible: no periods of binge-marking essays twice a year, or being dependent on anonymous student feedback. No applying for promotion while management constantly moves the goalposts. No pointless teaching/research assessments. No wasting time on grant applications with an 80+% failure rate. And a lot fewer emails, to the point that I sometimes wonder whether my email is broken.
In short, I'm happy where I am. Working in academia was an interesting time, and sometimes I miss it, but overall I'm glad I left when I did.
Tuesday 28 March 2017
Two Years Later...
After two years, I am no longer a 'Whisker'. Instead, I now have a new job which is a much better fit for my original (academic) background.
The company I now work for is Artificial Solutions, which builds bespoke virtual assistants for commercial websites, as well as stand-alone bots such as Indigo (available as an iOS app) and Elbot. My official job title is Knowledge Engineer, and my work involves designing dialogue flows and general backend NL components.
To be honest, I'm glad that I am no longer at a start-up. The culture is a bit wearing, especially if you're just about the oldest person in the office (at least among the developers). I also didn't like being bossed about by people who were less experienced yet always seemed to know better...
So, who knows, maybe I'll post again in two years about what happened since. In the meantime, I have a new blog which is updated more frequently.
Sunday 10 May 2015
Two Years Later...
There has been quite a quiet patch on this blog, so I thought I'd give a quick update, even though I suspect my readership has now dwindled away altogether.
I'm no longer a freelancer; instead, I'm now a full-time employee of Whisk. The work is still fun and challenging, the colleagues are great, and I now regret that I didn't take the leap earlier. My heart aches when I think of all the years I wasted in academia, thinking conditions would improve.
So, in retrospect, quitting lecturing and going out to work in the wild world was the right choice...
Monday 15 July 2013
Changing Places... and Names
The observant among my two readers might have noticed the name of this blog has changed. While in a sense I'm still learning and doing research, I'm no longer teaching as part of my job. So a name change seemed only appropriate.
As for teaching, that was the one aspect of my previous job that I thought I was going to miss. However, as it turns out, I will already be speaking at two conferences in September: iOSDevUK in Aberystwyth, and BrightonSEO in, erm, Brighton. And unlike at academic conferences, the organisers actually pay you to speak (in the form of accommodation, travel, or both).
One big worry when freelancing is that there won't be any work available. However, after six weeks I have pretty much had the opposite experience (though it might be a bit early to draw any conclusions yet). I'm currently working at a Birmingham start-up, Whisk, improving their linguistic analysis. It turns out there is quite a demand for linguistic analysis out in the 'wild' world of business. At the moment I'm doing about eight hours a day of research, and no teaching or admin. Compared to the past years at the university, this seems like paradise. And of course, there is no pressure to produce publications no one will read.
But as with any fairy-tale paradise, it remains to be seen whether I will get chucked out at some point and have to do a boring contract to pay the bills, or sit at home waiting for a phone call from a recruiter. At the moment, though, I'm optimistic that this won't happen any time soon.
Stay tuned...
Sunday 10 March 2013
Career Change
Interesting times... after 19 years at my current university, from Research Associate to Computer Officer to Lecturer, I have finally decided to leave academia to work as a contractor/consultant in IT and NLP.
I was thinking of writing a post about my reasons for this rather radical step, but I think it would quickly turn into a rant, and I don't want to dwell too much on negative things. To summarise: I am no longer happy with the way higher education is developing in this country. It is less and less focused on what I believe to be the point of academia, namely broadening people's horizons and preparing them to adapt to whatever life throws their way. With the rise in costs, students increasingly care only about assessments, and I can't really blame them. A further aspect is the importance of targets, especially in recruiting postgraduate students, where bums on seats are all that count. And then there is the continuous fragmentation of the job, with so many little tasks to deal with at once that it becomes difficult to concentrate on any one of them.
There have also been many positive aspects: curious students, friendly colleagues, interesting topics to work on. But I no longer feel that working in academia makes the best use of my skills. It was a tough decision to make, but I don't want to ask myself in 20 years' time, "Why did you not do something while there still was time?" And given that I originally came on a one-year contract, I think I managed to stay on for quite a long time.
Anyway, from June 1st this year I will be looking at HE from the outside only.
Wednesday 4 July 2012
Education Emptiness
I have read too much Roger Schank. And I've been thinking too much. This is something best avoided, especially when marking student essays. Why? Because it makes you question what it is we're doing in Higher Education.
Assessing Essays
Marking essays is a soul-destroying task: you have too little time to spend on each one, and a large pile of them to get through. Most students spend days or even weeks preparing their essays, so it always feels wrong to read and assess one in about half an hour, including writing up the feedback. This is very unsatisfactory, but otherwise the marking simply cannot be turned around in the allocated time.
But the worst thing about marking is its reductionist nature. An essay is a complex piece of writing, made up of style, argument, expression of knowledge, understanding, interpretation, analysis, discussion, and so on. And all these different dimensions get conflated into a single point on a one-dimensional scale: a grade between about 40 and 70. This is just not right.
Many essays end up with the same numerical grade, but they are not really comparable. One student might write eloquently but superficially; another provides deep insights with terrible grammar. One student might have a great idea but not much understanding of the underlying concepts; another has solidly learned everything that was required but lacks the creativity to apply the principles to a given problem. Yet they all get the same numerical value. Different feedback, sure, but that does not really count for anything.
School children get detailed reports (alongside a few simple letter grades), but in HE there are simply not the resources to do this, as there are too few staff and too many students, unless you are at Oxbridge. In principle, though, that should not be an insurmountable problem.
You're doing it wrong!
After grading, a mathematical crime is committed: the numerical grades are added up and averaged. This is simply not valid. The numbers are not really numbers; they are labels that happen to look like numbers. If we assigned the essays letters instead, it would be more obvious: what is the average of an A and a B? But that is a completely different issue, to be discussed another day...
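To make the point concrete, here is a toy sketch (in Python, with marks and mappings invented purely for illustration) of why averaging ordinal labels is dubious: the result depends entirely on the arbitrary numbers you attach to each label, and two equally defensible mappings can even reverse the ranking of two students.

```python
# Toy illustration: averaging ordinal grade labels depends on an arbitrary
# numeric mapping. The marks and mappings below are invented for the example.

student_a = ["A", "C", "C"]   # one brilliant essay, two mediocre ones
student_b = ["B", "B", "B"]   # consistently good

# Two equally 'reasonable' ways of turning letters into numbers:
equal_steps = {"A": 70, "B": 60, "C": 50}   # evenly spaced marks
top_heavy   = {"A": 90, "B": 65, "C": 55}   # an A counts for a lot more

def average(marks, mapping):
    """Average the numeric values assigned to a list of letter grades."""
    return sum(mapping[m] for m in marks) / len(marks)

for name, mapping in [("equal steps", equal_steps), ("top heavy", top_heavy)]:
    a = average(student_a, mapping)
    b = average(student_b, mapping)
    winner = "A" if a > b else "B"
    print(f"{name}: student A = {a:.1f}, student B = {b:.1f} -> student {winner} ranks higher")
```

Under the first mapping student B comes out ahead; under the second it is student A. Which of the two is 'correct' is not a mathematical question at all, it is a judgement about what the labels are supposed to mean.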
Essentially, then, after a lot of adding and averaging, the whole three years a student spends at university are reduced to a single label again: the degree classification. This is another enormous reduction of a multitude of information, this time to one of just four categories. And this one point decides what career a student can then pursue...
As the degree class is so important (and expensive, especially for the incoming cohorts of students), it tends to be at the forefront of students' minds. This is of course a wild generalisation, and there are many exceptions, but in my experience a lot of students are primarily interested in getting good grades. Learning becomes secondary, merely the means to the end of achieving those grades. That means curiosity, a central ingredient of successful learning, suffers, or rather is redirected into finding out how to get grades. One cannot really blame students for trying to game the system, which is essentially what they end up learning to do.
It does work elsewhere...
Postgraduate work is different, though, as it is less regimented. And, more importantly I think, PhD students do not get a grade. It's pass or fail: you either get a PhD, or you don't. There are of course differences: you can get through with major corrections, minor corrections, or no corrections at all. But nobody will ever know whether you scraped through with a 'revise and re-submit' or sailed through without any corrections. If it works for PhDs, why not for undergraduates as well?
The problem is that we've got too many undergraduates, so there needs to be some differentiation. But why? Who wants it? Presumably those who employ graduates, so that they can see who is better or worse. But does the degree class really reflect vocational ability? I would doubt that, but in the end it is just another filter to reduce the 52 applications for each graduate job to a manageable number. With PhDs this is not so much of an issue, as you can get a more rounded picture by looking at their previous grades or even publications.
In-Conclusion
So what is the solution to this dilemma? It basically requires a system change, which is probably not feasible. Employers want differentiation, universities want to climb the league tables (which nowadays tend to include employability metrics), and students want something that distinguishes them from the crowd. But in the process, education suffers. Learning is not really the focus of HE, and we're just churning out graduates who are good at spotting what is needed to get a good grade and then doing exactly that.
We're assessing far too much, and it destroys what I think universities are all about: expanding your horizons, applying your knowledge and curiosity to interesting problems, being able to fail at tasks without jeopardising your future career, and generally maturing and learning stuff.
Thursday 12 January 2012
On-line Education?
Last term I completed the on-line module in Artificial Intelligence offered by Stanford's Sebastian Thrun and Google's Peter Norvig, both top academics and experts in their field. I guess it was successful, as I received a grade of 79% (a 'first' in UK terms, though I have the suspicion it doesn't quite work like that). Given the minimal effort I put in (mainly due to lack of time), I could very likely have achieved a better result with some extra work. But with a full-time job it's not so easy to set aside 10 hours a week, which was the amount of time recommended by the course leaders.
So I got 79%, but did I learn anything? Does that 79% reflect my achievements? And what was the overall learning experience like?
First, the learning experience: the module was delivered as a series of short low-tech video lectures, interspersed with multiple-choice or number-entry quizzes. Then there was homework (multiple-choice and number-entry quizzes) and a mid-term and final exam (both multiple choice and... you get the idea).
The lectures were interesting: rather than a filmed 'lecture', the camera looked down on a piece of paper on which the lecturers would write by hand. The tone was informal and friendly, and Thrun's charming German accent made me almost feel at home. I also learned, from the few head-shot video sequences, that Peter Norvig likes colourful shirts.
The quizzes, however, were rather limited. There was the problem of turning quite complex material into a simple format, and also (what I found hardest) a lack of context. As a result, the questions often covered trivial side aspects, or were impossible to answer due to ambiguity (judging from the few forum posts I looked at, many other people had the same issue). You can interpret a question in many different ways, especially if you need to take into account external constraints that have not been clearly specified.
Quite often you get an answer wrong, then look at the explanation of the intended result and think, "oh right, that's how they meant it".
I really struggled with Bayes networks, and I consistently got the wrong answers when asked how many independent parameters I would need to describe one. To this day I do not know why I need to know this. I can guess, but it was never really explained. Formal logic, on the other hand, was one of the things I felt very comfortable with, as I had covered it in my own undergraduate studies as a computational linguist. Yet I only got 1 out of 4 points in the final exam question, because I made one small error and all the subsequent answers, which built on it, were wrong as well.
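Coming back to those Bayes network questions: as far as I can reconstruct it now (this is my own summary, not the course's explanation), for a network of binary variables each node needs one independent probability per configuration of its parents, so a node with k parents contributes 2^k parameters, and you simply sum these over all nodes. Something along these lines:

```python
# Rough sketch of the parameter count I kept getting wrong, assuming all
# variables are binary (as in most of the course examples, if I recall
# correctly). A node with k parents needs one probability for each of the
# 2**k parent configurations; the complementary probabilities are determined.

def independent_parameters(parents):
    """'parents' maps each node name to the list of its parent nodes."""
    return sum(2 ** len(p) for p in parents.values())

# The textbook burglary/alarm network as an example:
alarm_net = {
    "Burglary": [],
    "Earthquake": [],
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"],
    "MaryCalls": ["Alarm"],
}

print(independent_parameters(alarm_net))  # 1 + 1 + 4 + 1 + 1 = 10
```

Whether knowing how to count them actually teaches you anything about Bayes networks is, of course, exactly my complaint.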
My best results were in computer vision: 100%. And that's even though I'm short-sighted! But do I really understand computer vision so much better than all the other areas of AI? No. The thing is, all that was asked in the relevant quizzes was basic maths. There was a simple formula relating various parameters, such as focal length and distances, to each other, and all you had to do was rearrange the equation for different values and work out the result. I could have done this before the course started, and didn't learn anything new from it. Still, I was assessed on it and scored 100%. Anybody with basic maths would have managed that, even without watching a single minute of the class videos.
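If I remember correctly, the formula in question was essentially the pinhole projection: an object of size X at distance Z, seen through a camera with focal length f, produces an image of size x = f * X / Z. Each quiz question then gave you three of the four quantities and asked for the fourth, which is nothing more than rearranging (the numbers below are invented, just to show the kind of exercise it was):

```python
# Pinhole projection, x = f * X / Z, with all lengths in the same unit.
# The example numbers are invented; the point is that each quiz question
# simply asked you to solve this relation for whichever quantity was missing.

def image_size(f, X, Z):
    """Image size of an object of size X at distance Z, for focal length f."""
    return f * X / Z

def object_distance(f, X, x):
    """Distance of an object of size X whose image has size x."""
    return f * X / x

print(image_size(f=0.05, X=2.0, Z=10.0))       # 0.01  (2 m object, 10 m away, 50 mm lens)
print(object_distance(f=0.05, X=2.0, x=0.01))  # 10.0  (and solved the other way round)
```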
So my first criticism is that the quizzes were not designed properly. There is a lot more one can do with multiple-choice questions, but Thrun and Norvig didn't do it. The assessments felt like an ad-hoc addition, along the lines of "I need a quiz now, so what could I ask?".
My second criticism is the way the scoring worked: one slight mistake, nil points. In a real exam you would get partial credit for results that are wrong only because of a mistake in an intermediate step. An all-or-nothing approach is not very helpful.
Is this the future of education? Are on-line classes like this all we need? I don't think so. Apart from the implementation (it would be easy to come up with better quizzes), it is also quite detached. There is little direct interaction (impossible with 140,000+ students), and at times you feel a bit lost. This was obviously an experiment, and as such one cannot expect wonderful and perfect results, but there is still a long way to go.
Did I learn anything I would not have learned from reading a book? Probably not. The main advantage for me was the pressure of getting through the weekly session before the hand-in date, which makes you set aside time you would otherwise spend on something else. In that respect it was alright; and the fact that it was delivered on-line was convenient, as you could choose when to study. But while this works well as a supplementary course, I am glad I had proper seminars and lectures when I went to university.
While you can't argue with a free course (you did get more than you paid for!), there is still a lot of scope for improvement in this particular format, the on-line distance course, and I cannot see it replacing 'proper' seminars any time soon. But it was an interesting experience overall, if only to find out what 'real' teaching should be like.
Labels: education, learning, multiple-choice, quizzes, teaching