Save the date: FinEd is arranging a preseminar in connection with the FERA Conference on Education 2025 at the University of Lapland in Rovaniemi, on Wednesday 5 November 2025 from 10 AM to 4:30 PM.
The preseminar, entitled Using Generative AI in Educational Research – Do’s, Don’ts, and How to Do It, will comprise a keynote speech by Prof. Petri Nokelainen (TAU) in the morning and practical workshopping in the afternoon. Participation in the preseminar is free, but registration is required via the conference website (the early-bird registration deadline has been extended from 22 September to 30 September 2025). A brief description of the preseminar and the abstract of Prof. Nokelainen’s keynote speech are below. The keynote will be arranged in hybrid form to enable online attendance; more detailed information about the workshops will be available soon.
Using Generative AI in Educational Research – Do’s, Don’ts, and How to Do It
Artificial Intelligence, or AI, has developed at an exponential rate in the past couple of years, evolving from search-engine-style assistance into a generative research tool. Institutions as well as individual researchers and students may experience uncertainty as to what does and does not constitute acceptable AI use in research. The FinEd preseminar delves into issues related to the ethical and productive use of AI, ranging from current rules and guidelines in Finnish universities to hands-on exploration of key AI tools, with a keynote speech in the morning and practical workshopping in the afternoon. The preseminar is targeted particularly at PhD researchers, but postdocs and senior researchers are also welcome to participate.
AI Literacy in Doctoral Research: Challenges and Opportunities ~ Keynote speech, Prof. Petri Nokelainen (TAU)
This presentation examines the role of artificial intelligence (AI), particularly generative AI, in doctoral research within the Finnish educational sciences. Using Tampere University’s Education and Society doctoral programme as a case, the presentation situates AI use within FinEd, the national doctoral education network.
A distinction is made between everyday “ordinary” AI tools and generative large language models (LLMs), which became widely accessible only in late 2022. Existing European guidelines on responsible AI use are reviewed, alongside Tampere University’s doctoral regulations and the programme-specific guidelines developed for supervisors and students. These specify permitted uses (e.g., language editing, translation, formatting, reference management), prohibited uses (outsourcing reasoning, unverified content, handling sensitive data), and the requirement to disclose AI use both in the dissertation text and through a reporting form.
The dissertation process — supervision, external pre-examination, and the public defence — is highlighted as a key arena where originality, authorship, and integrity must be safeguarded. Doctoral researchers must retain full ownership of their work, particularly in the oral defence where no technological assistance is possible.
Finally, the presentation foregrounds the equity challenge: doctoral researchers differ not only in access to AI tools but also in levels of AI literacy, from basic grammar-checking to advanced prompting and programmatic use. Generative AI should neither be mandatory for success nor a substitute for scholarly expertise. Looking ahead, the question is whether doctoral education should include structured training in AI literacy to ensure integrity, ownership, and fairness in an AI-rich research landscape.