All staff at the university where I work are required to keep up to date with things that matter to our students, our reputation and ourselves. Data protection is one of these things, and we have, in time for the new academic session, an updated self-service training course on it.
The course starts with a couple of case studies which cite the actually not very large fines, relatively speaking, levied on public bodies for failures leading to potential data loss. The failures are things like losing a laptop; not using basic encryption of personal data; and not knowing the difference between cc and bcc. I am old enough to know what the letters stand for and why. The greater rationale is, of course, to look after people by being careful with what you know, write and say about them, including what might be inferred from partial data. Social media, and more recently ChatGPT and other demonstrations of how pieces can be assembled by machines, have helped raise awareness of what may go wrong when information is shared and connected.
I have done this course before, and think I know a lot about it, but it was good to be reminded that personal data does not have to be in an electronic form to be protected. We interview often, and take great care with the notes we make in the process, ever mindful of freedom of information rights of access. The course gives a list of specific examples, in our context, of what can be considered personal data. The list ranges from the obvious (student names and numbers) to the less so, like GPS data embedded in a digital photo. “Special categories” of data relate to possibly private information which might be used to discriminate, such as union membership, religious beliefs or political opinions. Biometric data is included in this category.
Data protection principles
Six principles of data protection sit within an overarching principle of accountability. These principles require staff to act lawfully, fairly and openly; to be deliberate in how data is used, complying with privacy notices; to collect only the data needed to do the task it is needed for; to keep it accurate and up to date; to anonymise if possible and keep it only while it is needed (archiving is OK if it is in the public or research interest); and to keep data securely.
People have rights when it comes to their data, including the right to be informed (that you have data about them and why you need it); the right to see a copy of what you have about them; to have a copy in a form they can use; to have it corrected if it’s wrong; to prevent use of the data pending other actions; and to “opt out” of things like profiling for marketing. There is also a right to have decisions about them made by humans and not based on automated processing. There is no general right to be forgotten, however: the right to have data removed is only available under certain circumstances.
When it comes to marketing, the sleazy term “legitimate interest” allows anyone to use data about people to target them for marketing and advertising purposes, provided that they take certain steps like including “unsubscribe” links in emails. This is why we all get calls, spam and junk all the time.
There is information in the course that goes into quite a lot of detail about policy and legal aspects that are not at all relevant to the roles I hold, for example relating to “The Adequacy Decision in International Transfers”, matters of executive responsibility and legal acronymic phenomena like “SDPC” or “ACOC”, which I found funny, if only for their lack of definition in the course materials.
Procedures for what to do on discovery of a data breach are clearly described and understandable.
As a doctoral researcher, I will be required to undergo further training in data protection for researchers at the appropriate time.
Missing from this course
I was surprised by how much was missing from this course. It focuses on the administrative, legal and marketing aspects of what a university does corporately, but fails to address the threat surfaces presented to and by staff. The case studies in the introduction signalled this, but it is conspicuously missing. My colleagues and I could greatly reduce these risks with a little education. For example: what is the difference between cc and bcc, and why does it matter?
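The distinction is easy to demonstrate with Python’s standard `email` library (the addresses below are made up for illustration): To and Cc addresses travel in the message headers, which every recipient can read, while Bcc addresses should never be written into the message at all, only passed separately to the mail server at delivery time. A minimal sketch:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "admin@example.ac.uk"
msg["To"] = "student-a@example.com"
msg["Cc"] = "tutor@example.ac.uk"      # visible to every recipient
bcc = ["student-b@example.com"]        # kept out of the headers entirely

msg["Subject"] = "Exam arrangements"
msg.set_content("Details attached.")

# Bcc recipients are named only in the delivery envelope, e.g.
# smtp.send_message(msg, to_addrs=[msg["To"], msg["Cc"]] + bcc),
# so nothing about them appears in what recipients receive.
print("Bcc" in msg)  # prints False
```

Put an unrelated mailing list in the To or Cc field instead of the envelope, and every address is disclosed to every recipient, which is exactly the kind of breach the case studies describe.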
The risks presented by the university IT systems infrastructure, relying as it does on the “nobody-ever-got-fired-for-buying” Microsoft tools and software, would be substantially diminished by such education and sharing of good practice. Why we are not using products like Wire I do not know. Unofficially, of course, you do find things being done to circumvent the clunky and dysfunctional official software (Learn Ultra, anyone? “Simple, modern and intuitive” it is not). Ironically, despite the risks, these unofficial actions continue to protect the university and its community by providing greater security, coherence and even functionality than the official systems.