The theme of the 2019 IT Futures Conference was “Automating our future: opportunities and threats.” It was opened by Prof. Peter Mathieson, who put a positive and hopeful spin on what for some may have been a depressing morning after the election that returned Boris Johnson a “huge great stonking mandate.”
The Principal’s expectation that the university would have a significant role to play in the unfolding future was underpinned by much of what followed in the rest of the day. He noted that removing uncertainty is good and that technology holds the key to, for example, modernising online learning and the curriculum. He cited Edinburgh’s credentials in AI[1] and spoke about the donation made by Baillie Gifford in support of the Edinburgh Futures Institute, which offers such opportunity to the city, the University and the country.
Science fiction writer Charlie Stross read us his wittily titled prepared talk, “Artificial Intelligence: Threat or Menace?”, which was lightweight, lightly entertaining[2] and entirely inconsequential in comparison with Alan Bundy’s excellent “brief history” of AI. Alan’s talk was very well pitched for the audience and delivered the forecast that the future of AI lies in hybrid systems. He included a specific acknowledgement of Turing’s 1950 work,[3] one of my all-time favourite papers, which I now have on my Christmas reading list.
A project on automated subtitling was presented by two of the officers involved. They concluded that such efforts in inclusion and accessibility should involve students, not only because of the employment opportunity such activity presents but also because the students are more effective than the automated process alone: another example of hybrid solutions in the application of AI.
Professor Jane Hillston, head of the School of Informatics, spoke to us on the theme of “AI for good”. Her talk included plans for a centre of that name. Speaking on data ethics, Jane referenced the short film Slaughterbots as one vision of the application of AI:
Randall Munroe’s vision is no less concerning (XKCD):
I found that Jane’s talk took a distinctly political tone, as academic leaders’ talks sometimes do when justifying their positions and the funding they require. But I must not be cynical about this: as a society, we certainly do need a voice to advocate for the responsible application of AI, and her new centre might just be that.
After a networking lunch, Kobi Gal presented similar messages to those he presented at a seminar I attended in February, namely that students online don’t like participating in forums, and that collaborative tools like Nota Bene can go some way towards mitigating that problem. I found myself disappointed in this talk for the same reasons I was disappointed in February: despite being presented as an example of modern and well-funded research, it is still a manifestly amateur and ill-informed effort that “betrays a gap in understanding of pedagogy that could easily be addressed through dialogue with educators.”[4] Knowing what outstanding work is being done in and around AI, I am personally surprised that this work is sustainable. One of my former PhD supervisors, Dr Hamish MacLeod, took the role of “discussant” and asked some interesting questions that avoided critique of Kobi’s talk, instead focusing on some important general matters such as the limitations of AI for identifying the “teachable moment”[5] and the danger of what Hamish called “the tyranny of the majority”, in which the needs of the individual learner are ignored.[6]
The Law School’s Burkhard Schafer told a story of how a German police response to legitimate protest was exposed through a paper trail that revealed how they had taken decisions and actions that were, in effect, the oppression of civil rights. Burkhard’s warning for citizen-state interaction is that AI will leave no such trace and so cannot be monitored.
Can you imagine this implemented in deep networks? (XKCD again):
Michael Gallagher and Markus Breines presented initial findings from their research into the use of automated agents in teaching, which focused more on the automation of workflow and processes than on machine learning. They make use of several automated agents, such as IFTTT, Zapier, Google Scholar and TeacherBot, and are asking questions about what the teacher function is. These researchers seem to be right on the money with their work: they understand the pedagogical and social implications of what they are doing, citing the Chinese social credit[7] innovation as an example of what’s happening in the world with automation systems, and drawing on the 2016 Manifesto for Teaching Online for inspiration. I can hardly do justice to the excitement I felt during this talk and the restoration of my faith in high-quality academic work being done here at Edinburgh. If you have ideas for them to look at in their study, they are open to suggestions at bit.ly/edteacher.
The final presentation of the day was as exciting and uplifting as anything I could have asked for. It was given by the new DDI boss, Jarmo Eskelinen, who gave us an update and details on the Data-Driven Innovation (DDI) programme and the Edinburgh City Deal.
The University’s CIO, Gavin McLachlan, closed the conference with his summary of the day, suggesting that we may be entering a new Scottish Enlightenment, which is perhaps a little hyperbolic but is an understandable claim, given Jarmo’s intentions for DDI of making Edinburgh the “Data Capital of Europe (only Europe doesn’t know it yet)”. From what I’ve seen at other recent events, we’re already on our way.
[1] Edinburgh was the second university in the world to offer a course in AI, after Stanford.
[2] The tweetable quote from Charlie’s talk was, “Emergent culture does not have to make sense.”
[3] Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, LIX(236), pp. 433–460. doi: 10.1093/mind/LIX.236.433.
[5] See Havighurst, R. J. (1952) Human Development and Education, p. 5.
Kobi’s tweetable quote: “… you can’t be an ex-physicist. It’s like a religion.”