
Empowering undergraduate computer science students to shape generative AI research

As use of generative artificial intelligence (or generative AI) tools such as ChatGPT, GitHub Copilot, or Gemini becomes more widespread, educators are thinking carefully about the place of these tools in their classrooms. For undergraduate education, there are concerns about the role of generative AI tools in supporting teaching and assessment practices. For undergraduate computer science (CS) students, generative AI also has implications for their future career trajectories, as it is likely to be relevant across many fields.

Dr Stephen MacNeil, Andrew Tran, and Irene Hou (Temple University)

In a recent seminar in our current series on teaching programming (with or without AI), we were delighted to be joined by Dr Stephen MacNeil, Andrew Tran, and Irene Hou from Temple University. Their talk showcased several research projects involving generative AI in undergraduate education, and explored how undergraduate research projects can create agency for students in navigating the implications of generative AI in their professional lives.

Differing perceptions of generative AI

Stephen began by discussing the media coverage around generative AI. He highlighted the binary split in this coverage: some representations frame generative AI as signalling the end of higher education — including programming in CS courses — while others emphasise the problems generative AI could solve for educators, such as improving access to high-quality help (specifically, virtual assistance) or personalising learning experiences.

Students sitting in a lecture at a university.

As part of a recent ITiCSE working group, Stephen and colleagues conducted a survey of undergraduate CS students and educators and found conflicting views about the perceived benefits and drawbacks of generative AI in computing education. Despite this divide, most CS educators reported that they were planning to incorporate generative AI tools into their courses. Students and educators also disagreed about which uses of generative AI tools are allowed and whether their universities had clear policies around their use.

The role of generative AI tools in students’ help-seeking

There is growing interest in how undergraduate CS students are using generative AI tools. Irene presented a study in which her team explored the effect of generative AI on undergraduate CS students’ help-seeking preferences. Help-seeking can be understood as any actions or strategies undertaken by students to receive assistance when encountering problems. Help-seeking is an important part of the learning process, as it requires metacognitive awareness to recognise that a problem exists that requires external help. Previous research has indicated that instructors, teaching assistants, student peers, and online resources (such as YouTube and Stack Overflow) can assist CS students. However, as generative AI tools are now widely available to assist in some tasks (such as debugging code), Irene and her team wanted to understand which resources students valued most, and which factors influenced their preferences. Their study consisted of a survey of 47 students and follow-up interviews with 8 additional students.

Undergraduate CS student use of help-seeking resources

Responding to the survey, students stated that they used online searches or support from friends/peers more frequently than the two generative AI tools, ChatGPT and GitHub Copilot; however, Irene noted that as data collection took place at the beginning of summer 2023, it is possible that students were not yet familiar with these tools or had not used them. In terms of their help-seeking experiences, students found online searches and ChatGPT faster and more convenient, though they felt these resources led to less trustworthy or lower-quality support than seeking help from instructors or teaching assistants.

Two undergraduate students are seated at a desk, collaborating on a computing task.

Some students felt more comfortable seeking help from ChatGPT than peers as there were fewer social pressures. Comparing generative AI tools and online searches, one student highlighted that unlike Stack Overflow, solutions generated using ChatGPT and GitHub Copilot could not be verified by experts or other users. Students who received the most value from using ChatGPT in seeking help either (i) prompted the model effectively when requesting help or (ii) viewed ChatGPT as a search engine or comprehensive resource that could point them in the right direction. Irene cautioned that some students struggled to use generative AI tools effectively as they had limited understanding of how to write effective prompts.

Using generative AI tools to produce code explanations

Andrew presented a study where the usefulness of different types of code explanations generated by a large language model was evaluated by students in a web software development course. Based on Likert scale data, they found that students rated line-by-line explanations as less useful than high-level summary or concept explanations, even though line-by-line explanations were the most popular type. They also found that explanations were less useful when students already knew what the code did. Andrew and his team then qualitatively analysed code explanations that had been given a low rating and found three recurring flaws: the explanations were overly detailed (i.e. focusing on superfluous elements of the code), the explanation given was the wrong type, or the explanation mixed code with explanatory text. Despite the flaws of some explanations, they concluded that students found explanations relevant and useful to their learning.
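
To make the three explanation types more concrete, here is a minimal sketch of prompt templates a tool might use to request each type from a model. The wording and the ask_llm helper are illustrative assumptions, not the prompts used in the study.

    # Hypothetical prompt templates for the three explanation types
    # discussed above; ask_llm stands in for any LLM API call.
    EXPLANATION_PROMPTS = {
        "line-by-line": "Explain what each line of this code does:\n{code}",
        "summary": "Summarise at a high level what this code does:\n{code}",
        "concept": "Explain the programming concepts this code uses:\n{code}",
    }

    def explain(code, style, ask_llm):
        """Request one style of explanation for a code snippet."""
        return ask_llm(EXPLANATION_PROMPTS[style].format(code=code))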

Perceived usefulness of code explanation types

Using generative AI tools to create multiple choice questions

In a separate study, Andrew and his team investigated the use of ChatGPT to generate novel multiple choice questions for computing courses. The researchers prompted two models, GPT-3 and GPT-4, with example question stems to generate correct answers and distractors (incorrect but plausible choices). Across two data sets of example questions, GPT-4 significantly outperformed GPT-3 in generating the correct answer (75.3% and 90% vs 30.8% and 36.7% of all cases). GPT-3 performed less well at providing the correct answer when faced with negatively worded questions. Both models generated correct answers as distractors across both sets of example questions (GPT-3: 11.1% and 10% of cases; GPT-4: 9.9% and 17.8%). They concluded that educators would still need to verify whether answers were correct and distractors were appropriate.
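
A few-shot prompt for this kind of question generation might look like the sketch below; the template wording is an assumption for illustration, not the researchers’ actual prompt.

    # Hypothetical few-shot prompt builder for multiple choice question
    # generation; not the prompt used in the study.
    def build_mcq_prompt(example_stems, new_topic):
        """Assemble a prompt from example question stems."""
        shots = "\n".join(f"- {stem}" for stem in example_stems)
        return (
            "You write multiple choice questions for an introductory "
            "computing course.\n"
            f"Example question stems:\n{shots}\n\n"
            f"Write one new question about {new_topic}, giving the "
            "correct answer and three plausible but incorrect distractors."
        )

    prompt = build_mcq_prompt(
        ["What is printed by this loop?", "Which index does this list access?"],
        "string slicing in Python",
    )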

An undergraduate student is raising his hand up during a lecture at a university.

Undergraduate students shaping the direction of generative AI research

Given students’ concerns about generative AI and its implications for the world of work, the seminar ended with a hopeful message: undergraduate students are being proactive in conducting their own research and shaping the direction of generative AI research in computer science education. Stephen concluded the seminar by celebrating the undergraduate students who are undertaking these research projects.

You can watch the seminar here:

If you are interested in learning more about Stephen’s work on generative AI, you can read about how undergraduate students used generative AI tools to create analogies for recursion. If you would like to experiment with using generative AI tools to assist with debugging, you could try using Gemini, ChatGPT, or Copilot.

Join our next seminar

Our current seminar series is on teaching programming with or without AI. 

In our next seminar, on 16 July from 17:00 to 18:30 BST, we welcome Laurie Gale (Raspberry Pi Computing Education Research Centre, University of Cambridge), who will discuss how to teach debugging to secondary school students. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

The schedule of our upcoming seminars is available online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

The post Empowering undergraduate computer science students to shape generative AI research appeared first on Raspberry Pi Foundation.

Imagining students’ progression in the era of generative AI

Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.

Brett Becker.

We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.

In a computing classroom, two girls concentrate on their programming task.

Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.

How do AI tools currently perform?

Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).

A graph comparing exam scores.

Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.

Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.

What are challenges and opportunities for education?

This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI. 

The opportunities include: 

  • The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
  • More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
  • More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
  • The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how

Some of the challenges include:

  • The lack of reliability and accuracy of outputs from generative AI tools
  • The need to educate everyone about AI to create a baseline level of understanding
  • The legal and ethical implications of using AI in computing education and beyond
  • How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses

Programming as a basic skill for all subjects

Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges. 

He emphasised our responsibility to keep students safe. One way to do this is to empower all students with a baseline level of knowledge about AI, at an age-appropriate level, to enable them to keep themselves safe. 

Secondary school age learners in a computing classroom.

He also discussed the growing relevance of programming to all subjects, not only Computing, in much the same way that reading and mathematics transcend the boundaries of their own subjects, and the need he sees to adapt subjects and curricula accordingly.

As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer Science specification for the “senior cycle” (the final two years of second-level education, ages 16–18). This curriculum was developed in 2018 and remains a strong computing curriculum in Brett’s opinion. However, he pointed out that it contains only a single learning outcome on AI.

To help educators bridge this gap, in the book Brett wrote alongside Keith Quille to accompany the curriculum, they included two chapters dedicated to AI, machine learning, and ethics and computing. Brett believes these types of additional resources may be instrumental for teaching and learning about AI as resources are more adaptable and easier to update than curricula. 

Generative AI in computing education

Seizing the opportunity to use generative AI to create new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. The tool offers a combined approach: students learn about generative AI while learning programming with an AI tool.

Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the generated code, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2).
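
Conceptually, this workflow is a generate-and-test loop. The sketch below illustrates the idea under stated assumptions: generate_code stands in for an AI code generator call, the generated code is assumed to define a function named solve, and this is not Promptly’s actual implementation.

    # Conceptual generate-and-test loop, loosely modelled on the
    # workflow described above; not Promptly's real code.
    def run_tests(source, test_cases):
        """Run generated code (assumed to define solve) against test cases."""
        namespace = {}
        exec(source, namespace)  # trusted classroom setting assumed
        failures = []
        for args, expected in test_cases:
            result = namespace["solve"](*args)
            if result != expected:
                failures.append((args, expected, result))
        return failures

    def prompt_loop(generate_code, test_cases):
        while True:
            prompt = input("Describe the function you want: ")
            source = generate_code(prompt)
            failures = run_tests(source, test_cases)
            if not failures:
                print("All test cases pass!")
                return source
            for args, expected, got in failures:
                print(f"solve{args} returned {got}, expected {expected}")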

An example of the Promptly interface.

Figure 2: Example of a student’s use of Promptly.

Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well. 

Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.

Re-examining the concept of programming

Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”

You can watch Brett’s presentation here:

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar on Tuesday 11 June from 17:00 to 18:30 BST, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.

To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

The post Imagining students’ progression in the era of generative AI appeared first on Raspberry Pi Foundation.

An update from the Raspberry Pi Computing Education Research Centre

It’s been nearly two years since the launch of the Raspberry Pi Computing Education Research Centre. Today, the Centre’s Director Dr Sue Sentance shares an update about the Centre’s work.

The Raspberry Pi Computing Education Research Centre (RPCERC) is unique for two reasons: we are a joint initiative between the University of Cambridge and the Raspberry Pi Foundation, with a team that spans both; and we focus exclusively on the teaching and learning of computing to young people, from their early years to the end of formal education.

Educators and researchers mingle at a conference.
At the RPCERC launch in July 2022

We’ve been very busy at the RPCERC since we held our formal launch event in July 2022. We would love everyone who follows the Raspberry Pi Foundation’s work to keep an eye on what we are up to as well: you can do that by checking out our website and signing up to our termly newsletter.

What does the RPCERC do?

As the name implies, our work is focused on research into computing education, and all our research projects align with one of the following themes:

  • AI education
  • Broadening participation in computing
  • Computing around the world
  • Pedagogy and the teaching of computing
  • Physical computing
  • Programming education

These themes encompass substantial research questions, so it’s clear we have a lot to do! We have only been established for a few years, but we’ve made a good start and are grateful to those who have funded additional projects that we are working on.

A student in a computing classroom.

In our work, we endeavour to maintain two key principles that are hugely important to us: sharing our work widely and working collaboratively. We strive to engage in the highest-quality, rigorous research and to publish in academic venues, and we make sure these publications are openly available to those outside academia. We also favour research that is participatory and collaborative, so we work closely with teachers and other stakeholders.

Within our six themes we are running a number of projects, and I’ll outline a few of these here.

Exploring physical computing in primary schools

Physical computing is more engaging than simply learning programming and computing skills on screen because children can build interactive and tangible artefacts that exist in the real world. But does this kind of engagement have any lasting impact? Do positive experiences with technology lead to more confidence and creativity later on? These are just some of the questions we aim to answer.

Three young people working on a computing project.

We are delighted to be starting a new longitudinal project investigating the experience of young people who have engaged with the BBC micro:bit and other physical computing devices. We aim to develop insights into changes in attitudes, agency, and creativity at key points as students progress from primary through to secondary education in the UK. 

To do this, we will be following a cohort of children over the course of five years — as they transition from primary school to secondary school — to give us deeper insights into the longer-term impact of working with physical computing than has been possible previously with shorter projects. This longer-term project has been made possible through a generous donation from the Micro:bit Educational Foundation, the BBC, and Nominet. 

Do follow our research to see what we find out!

Generative AI for computing teachers

We are conducting a range of projects in the general area of artificial intelligence (AI), looking both at how to teach and learn AI, and how to learn programming with the help of AI. In our work, we often use the SEAME framework to simplify and categorise aspects of the teaching and learning of AI. However, for many teachers, it’s the use of AI that has generated the most interest for them, both for general productivity and for innovative ways of teaching and learning. 

A group of students and a teacher at the Coding Academy in Telangana.

In one of our AI-related projects, we have been working with a group of computing teachers and the Faculty of Education to develop guidance for schools on how generative AI can be useful in the context of computing teaching. Computing teachers are at the forefront of this potential revolution for school education, so we’ve enjoyed the opportunity to set up this researcher–teacher working group to investigate these issues. We hope to be publishing our guidance in June — again watch this space!

Culturally responsive computing teaching

We’ve carried out a few different projects in the last few years around culturally responsive computing teaching in schools, which to our knowledge are unique for the UK setting. Much of the work on culturally responsive teaching and culturally relevant pedagogy (which stem from different theoretical bases) has been conducted in the USA, and we believe we are the only research team in the UK working on the implications of culturally relevant pedagogy research for computing teaching here. 

Two young people learning together at a laptop.

In one of our studies, we worked with a group of teachers in secondary and primary schools to explore ways in which they could develop and reflect on the meaning of culturally responsive computing teaching in their context. We’ve published on this work, and also produced a technical report describing the whole project. 

In another project, we worked with primary teachers to explore how existing resources could be adapted to be appropriate for their specific context and children. These projects have been funded by Cognizant and Google. 

‘Core’ projects

As well as research that is externally funded, it’s important that we work on more long-term projects that build on our research expertise and where we feel we can make a contribution to the wider community. 

We have four projects that I would put into this category:

  1. Teacher research projects
    This year, we’ve been running a project called Teaching Inquiry in Computing Education, which supports teachers to carry out their own research in the classroom.
  2. Computing around the world
    Following on from our survey of UK and Ireland computing teachers and earlier work on surveying teachers in Africa and globally, we are developing a broader picture of how computing education in school is growing around the world. Watch this space for more details.
  3. PRIMM
We devised the Predict–Run–Investigate–Modify–Make lesson structure for programming a few years ago and continue to research this area.
  4. LCT semantic wave theory
Together with universities in London and Australia, we are exploring ways in which computing education can draw on legitimation code theory (LCT).

We are currently looking for a research associate to lead on one or more of these core projects, so if you’re interested, get in touch. 

Developing new computing education researchers

One of our most important goals is to support new researchers in computing education, and this involves recruiting and training PhD students. During 2022–2023, we welcomed our very first PhD students, Laurie Gale and Salomey Afua Addo, and we will be saying hello to two more in October 2024. PhD students are an integral part of the RPCERC and make a great contribution across the team, as well as focusing in depth on their own particular areas of interest. Laurie and Salomey have also been out and about visiting local schools.

A person with glasses smiles.
Laurie Gale
A person with twists smiles.
Salomey Afua Addo

Laurie’s PhD study focuses on debugging, a key element of programming education. He is looking at lower secondary school students’ attitudes to debugging, their debugging behaviour, and how to teach debugging. If you’d like to take part in Laurie’s research, you can contact us at [email protected].

Salomey’s work is in the area of AI education in K–12 and spans the UK and Ghana. Her first study considered the motivation of teachers in the UK to teach AI, and she has spent some weeks in Ghana conducting a case study on how Ghana introduced AI into its curriculum in 2020.

Thanks!

We are very grateful to the Raspberry Pi Foundation for providing a donation which established the RPCERC and has given us financial security for the next few years. We’d also like to express our thanks for other donations and project funding we’ve received from Google, Google DeepMind, the Micro:bit Educational Foundation, BBC, and Nominet. If you would like to work with us, please drop us a line at [email protected].

The post An update from the Raspberry Pi Computing Education Research Centre appeared first on Raspberry Pi Foundation.

Insights into students’ attitudes to using AI tools in programming education

Educators around the world are grappling with the problem of whether to use artificial intelligence (AI) tools in the classroom. As more and more teachers start exploring ways to use these tools for teaching and learning computing, there is an urgent need to understand the impact of their use, to make sure they do not exacerbate the digital divide and leave some students behind.

A teenager learning computer science.

Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.

How students interact with AI tools 

A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The researchers observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot).

The data analysis uncovered five emergent attitudes of students using AI tools to complete programming tasks: 

  • Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
  • Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
  • Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts 
  • Frustrated students struggle with using the AI tool to complete the task and are likely to give up 
  • Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks

Whether these attitudes are common among other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and how to support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.

An intersectional approach to investigating student attitudes

Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.

A student in a computing classroom.

In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:

  • Students who were female and the first generation in their family to attend university
  • Students who were female and from an underrepresented ethnic group

Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.

Who thinks using AI tools should be considered cheating? 

Participating students were also asked about their views on using AI tools, such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”

The same intersectional approach was taken towards analysing students’ answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds agreed that this was true, whilst students with less privilege disagreed and said it was not cheating.

This finding comes from a very small group of students at a single university, so Yash and Mary Lou called on other researchers to replicate the study with other groups of students to investigate further.

You can watch the full seminar here:

Acknowledging differences to prevent deepening divides

As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides. 

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI.

For our next seminar on Tuesday 16 April from 17:00 to 18:30 BST, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students are best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

The post Insights into students’ attitudes to using AI tools in programming education appeared first on Raspberry Pi Foundation.

Using an AI code generator with school-age beginner programmers

AI models for general-purpose programming, such as OpenAI Codex, which powers the AI pair programming tool GitHub Copilot, have the potential to significantly impact how we teach and learn programming. 

Learner in a computing classroom.

The basis of these tools is a ‘natural language to code’ approach, also called natural language programming. This allows users to generate code using a simple text-based prompt, such as “Write a simple Python script for a number guessing game”. Programming-specific AI models are trained on vast quantities of text data, including GitHub repositories, to enable users to quickly solve coding problems using natural language. 
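
As an illustration, the kind of code such a prompt might yield is sketched below; actual model output varies, and this example is not taken from any specific tool.

    # Illustrative output for the prompt "Write a simple Python script
    # for a number guessing game"; real model output will differ.
    import random

    def guessing_game(low=1, high=100):
        secret = random.randint(low, high)
        while True:
            guess = int(input(f"Guess a number between {low} and {high}: "))
            if guess < secret:
                print("Too low!")
            elif guess > secret:
                print("Too high!")
            else:
                print("You got it!")
                break

    if __name__ == "__main__":
        guessing_game()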

As a computing educator, you might ask what the potential is for using these tools in your classroom. In our latest research seminar, Majeed Kazemitabaar (University of Toronto) shared his work in developing AI-assisted coding tools to support students during Python programming tasks.

Evaluating the benefits of natural language programming

Majeed argued that natural language programming can enable students to focus on the problem-solving aspects of computing, and support them in fixing and debugging their code. However, he cautioned that students might become overdependent on the use of ‘AI assistants’ and that they might not understand what code is being outputted. Nonetheless, Majeed and colleagues were interested in exploring the impact of these code generators on students who are starting to learn programming.

Using AI code generators to support novice programmers

In one study, Majeed’s team investigated whether an AI code generator affected students’ task and learning performance. They split 69 students (aged 10–17) into two groups: one group used a code generator within Coding Steps, an environment that captured log data, and the other group worked without the code generator.

A group of male students at the Coding Academy in Telangana.

Learners who used the code generator completed significantly more authoring tasks — where students manually write all of the code — and spent less time completing them, as well as generating significantly more correct solutions. In multiple choice questions and modifying tasks — where students were asked to modify a working program — students performed similarly whether they had access to the code generator or not. 

A test was administered a week later to check the groups’ performance, and both groups did similarly well. However, the ‘code generator’ group made significantly more errors in authoring tasks where no starter code was given. 

Majeed’s team concluded that using the code generator significantly increased the completion rate of tasks and student performance (i.e. correctness) when authoring code, and that using code generators did not lead to decreased performance when manually modifying code. 

Finally, students in the code generator group reported feeling less stressed and more eager to continue programming at the end of the study.

Student perceptions when (not) using AI code generators

Understanding how novices use AI code generators

In a related study, Majeed and his colleagues investigated how novice programmers used the code generator and whether this usage impacted their learning. Working with data from 33 learners (aged 11–17), they analysed 45 tasks completed by students to understand:

  1. The context in which the code generator was used
  2. What learners asked for
  3. How prompts were written
  4. The nature of the outputted code
  5. How learners used the outputted code 

Their analysis found that students used the code generator for the majority of task attempts (74% of cases) with far fewer tasks attempted without the code generator (26%). Of the task attempts made using the code generator, 61% involved a single prompt while only 8% involved decomposition of the task into multiple prompts for the code generator to solve subgoals; 25% used a hybrid approach — that is, some subgoal solutions being AI-generated and others manually written.
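
To illustrate the difference between these styles, here is a small invented example; the task and prompts are assumptions for illustration, not taken from the study data.

    # Invented example contrasting the observed prompting styles.

    # Single-prompt approach: one request for the whole task.
    single_prompt = (
        "Write a program that reads test scores and prints the "
        "average of the passing ones."
    )

    # Hybrid approach: decompose the task into subgoals, generate
    # some with the AI, and write the rest manually.
    subgoal_prompts = [
        "Write a function read_scores() that reads integers until a blank line.",
        "Write a function passing(scores) that keeps scores of 50 or more.",
    ]

    def average(scores):  # a subgoal the student writes by hand
        return sum(scores) / len(scores)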

In a comparison of students against their post-test evaluation scores, there were positive though not statistically significant trends for students who used a hybrid approach (see the image below). Conversely, negative though not statistically significant trends were found for students who used a single prompt approach.

A positive correlation between hybrid programming and post-test scores

Though not statistically significant, these results suggest that the students who actively engaged with tasks — i.e. generating some subgoal solutions, manually writing others, and debugging their own written code — performed better in coding tasks.

Majeed concluded that while the data showed evidence of self-regulation, such as students writing code manually or adding to AI-generated code, students frequently used the output from single prompts in their solutions, indicating an over-reliance on the output of AI code generators.

He suggested that teachers should support novice programmers to write better quality prompts to produce better code.  

If you want to learn more, you can watch Majeed’s seminar:

You can read more about Majeed’s work on his personal website. You can also download and use the code generator Coding Steps yourself.

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar on Tuesday 16 April at 17:00–18:30 BST, we’re joined by Brett Becker (University College Dublin), who will discuss how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for whatever lies ahead. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

The post Using an AI code generator with school-age beginner programmers appeared first on Raspberry Pi Foundation.

Supporting learners with programming tasks through AI-generated Parson’s Problems

The use of generative AI tools (e.g. ChatGPT) in education is now common among young people (see data from the UK’s Ofcom regulator). As a computing educator or researcher, you might wonder what impact generative AI tools will have on how young people learn programming. In our latest research seminar, Barbara Ericson and Xinying Hou (University of Michigan) shared insights into this topic. They presented recent studies with university student participants on using generative AI tools based on large language models (LLMs) during programming tasks. 

A girl in a university computing classroom.

Using Parson’s Problems to scaffold student code-writing tasks

Barbara and Xinying started their seminar with an overview of their earlier research into using Parson’s Problems to scaffold university students as they learn to program. Parson’s Problems (PPs) are a type of code completion problem in which learners are given all the correct code to solve the coding task, but the individual lines are broken up into blocks and shown in the wrong order (Parsons and Haden, 2006). Distractor blocks, which are incorrect versions of some or all of the lines of code (i.e. versions with syntax or semantic errors), can also be included. This means that to solve a PP, learners need to select the correct blocks and place them in the correct order.
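
As a concrete illustration, a PP can be thought of as a set of shuffled code blocks, possibly including a distractor, plus a check that the learner’s ordering matches the solution. This is a minimal sketch of the idea, not the tooling used in the studies.

    # Minimal sketch of a Parson's Problem: correct solution lines
    # shown shuffled alongside one semantic-error distractor.
    import random

    SOLUTION = [
        "def mean(numbers):",
        "    total = sum(numbers)",
        "    return total / len(numbers)",
    ]
    DISTRACTORS = [
        "    return total / len(numbers) + 1",  # incorrect variant
    ]

    def make_problem():
        blocks = SOLUTION + DISTRACTORS
        random.shuffle(blocks)
        return blocks

    def check(attempt):
        """The PP is solved when the attempt is exactly the solution."""
        return attempt == SOLUTION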

A presentation slide defining Parson's Problems.

In one study, the research team asked whether PPs could support university students who are struggling to complete write-code tasks. In the tasks, the 11 study participants had the option to generate a PP whenever they got stuck trying to write code from scratch, to help them arrive at the complete code solution; the PPs acted as scaffolding for these participants. Solutions used in the generated PPs were derived from past student solutions collected during previous university courses. The study had promising results: participants said the PPs were helpful in completing the write-code problems, and 6 participants stated that the PPs lowered the difficulty of the problem and sped up the problem-solving process, reducing their debugging time. Additionally, participants said that the PPs prompted them to think more deeply.

A young person codes at a Raspberry Pi computer.

This study provided further evidence that PPs can be useful in supporting students and keeping them engaged when writing code. However, some participants still had difficulty arriving at the correct code solution, even when prompted with a PP as support. The research team thinks a possible reason for this is that each PP offered only one solution, the same one for all participants. Participants with a different approach in mind would therefore likely have experienced a higher cognitive demand and would not have found that particular PP useful.

An example of a coding interface presenting adaptive Parson's Problems.

Supporting students with varying self-efficacy using PPs

To understand the impact of using PPs with different learners, the team then undertook a follow-up study asking whether PPs could specifically support students with lower computer science self-efficacy. The results show that study participants with low self-efficacy who were scaffolded with PP support showed significantly higher practice performance and higher problem-solving efficiency compared to participants who had no scaffolding. These findings provide evidence that PPs can create a more supportive environment, particularly for students who have lower self-efficacy or difficulty solving code-writing problems. Another finding was that participants with low self-efficacy were more likely to completely solve the PPs, whereas participants with higher self-efficacy only scanned or partly solved them, indicating that scaffolding in the form of PPs may be redundant for some students.

Secondary school age learners in a computing classroom.

These two studies highlighted instances where PPs are more or less relevant depending on a student’s level of expertise or self-efficacy. In addition, the best PP to solve may differ from one student to another, and so having the same PP for all students to solve may be a limitation. This prompted the team to conduct their most recent study to ask how large language models (LLMs) can be leveraged to support students in code-writing practice without hindering their learning.

Generating personalised PPs using AI tools

This recent third study focused on the development of CodeTailor, a tool that uses LLMs to generate and evaluate code solutions before generating personalised PPs to scaffold students writing code. CodeTailor encourages students to engage actively with problems: unlike AI-assisted coding tools that merely output a correct code solution, it requires students to construct solutions themselves using personalised PPs. The researchers were interested in whether CodeTailor could better support students to actively engage in code-writing.
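
At a high level, such a pipeline might resemble the sketch below: an LLM repairs the student’s incorrect code into a nearby correct solution, which is then scrambled into a personalised PP. The repair_code call is a placeholder assumption; this is not CodeTailor’s actual implementation.

    # Hypothetical sketch of a personalised-PP pipeline; repair_code
    # stands in for an LLM call that returns a corrected solution
    # close to the student's own attempt.
    import random

    def personalised_pp(student_code, task_description, repair_code):
        """Turn a student's incorrect code into a Parson's Problem."""
        fixed = repair_code(student_code, task_description)
        blocks = fixed.splitlines()
        shuffled = blocks[:]
        random.shuffle(shuffled)
        return shuffled, blocks  # puzzle blocks plus the answer key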

An example of the CodeTailor interface presenting adaptive Parson's Problems.

In a study with 18 undergraduate students, they found that CodeTailor could generate correct solutions based on students’ incorrect code. The CodeTailor-generated solutions were more closely aligned with students’ incorrect code than common previous student solutions were. The researchers also found that most participants (88%) preferred CodeTailor to other AI-assisted coding tools when engaging with code-writing tasks. As the correct solution in CodeTailor is generated based on individual students’ existing strategy, this boosted students’ confidence in their current ideas and progress during their practice. However, some students still reported challenges around solution comprehension, potentially due to CodeTailor not providing sufficient explanation for the details in the individual code blocks of the solution to the PP. The researchers argue that text explanations could help students fully understand a program’s components, objectives, and structure. 

In future studies, the team is keen to evaluate a design of CodeTailor that generates multiple levels of natural language explanations, i.e. provides personalised explanations accompanying the PPs. They also aim to investigate the use of LLM-based AI tools to generate a self-reflection question structure that students can fill in to extend their reasoning about the solution to the PP.

Barbara and Xinying’s seminar is available to watch here: 

Find examples of PPs embedded in free interactive ebooks that Barbara and her team have developed over the years, including CSAwesome and Python for Everybody. You can also read more about the CodeTailor platform in Barbara and Xinying’s paper.

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar on Tuesday 12 March at 17:00–18:30 GMT, we’re joined by Yash Tadimalla and Prof. Mary Lou Maher (University of North Carolina at Charlotte). The two of them will share further insights into the impact of AI tools on the student experience in programming courses. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

The post Supporting learners with programming tasks through AI-generated Parson’s Problems appeared first on Raspberry Pi Foundation.

Grounded cognition: physical activities and learning computing

Everyone who has taught children before will know the excited gleam in their eyes when the lessons include something to interact with physically. Whether it’s printed and painstakingly laminated flashcards, laser-cut models, or robots, learners’ motivation to engage with the topic will increase along with the noise levels in the classroom.

Two learners do physical computing in the primary school classroom.

However, these hands-on activities are often seen as merely a technique to raise interest, or a nice extra project for children to do before the ‘actual learning’ can begin. But what if this is the wrong way to think about this type of activity? 

How do children learn?

In our 2023 online research seminar series, focused on computing education for primary-aged (K–5) learners, we delved into the most recent research aimed at enhancing learning experiences for students in the earliest stages of education. From a deep dive into teaching variables to exploring the integration of computational thinking, our series has looked at the most effective ways to engage young minds in the subject of computing.

An adult on a plain background.

It’s only fitting that in our final seminar in the series, Anaclara Gerosa from the University of Glasgow tackled one of the most fundamental questions in education: how do children actually learn? Beyond the conventional methods, emerging research has been shedding light on a fascinating approach — the concept of grounded cognition. This theory suggests that children don’t merely passively absorb knowledge; they physically interact with it, quite literally ‘grasping’ concepts in the process.

Grounded cognition, also known in variations as embodied and situated cognition, offers a new perspective on how we absorb and process information. At its core, this theory suggests that all cognitive processes, including language and thought, are rooted in the body’s dynamic interactions with the environment. This notion challenges the conventional view of learning as a purely cognitive activity and highlights the impact of action and simulation.

A group of learners do physical computing in the primary school classroom.

There is evidence from many studies in psychology and pedagogy that using hands-on activities can enhance comprehension and abstraction. For instance, finger counting has been found to be essential in understanding numerical systems and mathematical concepts. A recent study in this field has shown that children who are taught basic computing concepts with unplugged methods can grasp abstract ideas from as young as 3. There is therefore an urgent need to understand exactly how we could use grounded cognition methods to teach children computing — which is arguably one of the most abstract subjects in formal education.

A new framework for teaching computing

Anaclara is part of a group of researchers at the University of Glasgow who are currently developing a new approach to structuring computing education. Their EIFFEL (Enacted Instrumented Formal Framework for Early Learning in Computing) model suggests a progression from enacted to formal activities.

Following this model, in the early years of computing education, learners would primarily engage with activities that allow them to work with tangible 3D objects or manipulate intangible objects, for instance in Scratch. Increasingly, students would then perform actions in an instrumented or virtual environment that requires knowledge of abstract symbols but not yet of programming languages. Eventually, students develop the knowledge and skills to engage in fully formal environments, such as writing advanced code.

A graph illustrating the EIFFEL model for early computing.

In a recent literature review, Anaclara and her colleagues looked at existing research into using grounded cognition theory in computing education. Although several studies report the use of grounded approaches, for instance using block-based programming, robots, toys, or construction kits, the focus is generally on how concrete objects can be used in unplugged activities in specific contexts, such as where the availability of computing devices is limited.

The next steps in this area are looking at how activities that specifically follow the EIFFEL framework can enhance children’s learning. 

You can watch Anaclara’s seminar here: 

You can also access the presentation slides here.

Try grounded activities in your classroom

Research into grounded cognition activities in computer science is ongoing, but we encourage you to try incorporating more hands-on activities when teaching younger learners and to observe the effects yourself.

Join us at our next seminar

In 2024, we are exploring different ways to teach and learn programming, with and without AI tools. In our next seminar, on 13 February at 17:00 GMT, Majeed Kazemi from the University of Toronto will be joining us to discuss whether AI-powered code generators can help K–12 students learn to program in Python. All of our online seminars are free and open to everyone. Sign up and we’ll send you the link to join on the day.

The post Grounded cognition: physical activities and learning computing appeared first on Raspberry Pi Foundation.

Integrating computational thinking into primary teaching

“Computational thinking is really about thinking, and sometimes about computing.” – Aman Yadav, Michigan State University

Young people in a coding lesson.

Computational thinking is a vital skill if you want to use a computer to solve problems that matter to you. That’s why we consider computational thinking (CT) carefully when creating learning resources here at the Raspberry Pi Foundation. However, educators are increasingly realising that CT skills don’t just apply to writing computer programs, and that CT is a fundamental approach to problem-solving that can be extended into other subject areas. To discuss how CT can be integrated beyond the computing classroom and help introduce the fundamentals of computing to primary school learners, we invited Dr Aman Yadav from Michigan State University to deliver the penultimate presentation in our seminar series on computing education for primary-aged children. 

In his presentation, Aman gave a concise tour of CT practices for teachers, and shared his findings from recent projects around how teachers perceive and integrate CT into their lessons.

Research in context

Aman began his talk by placing his team’s work within the wider context of computing education in the US. The computing education landscape Aman described is dominated by the National Science Foundation’s ambitious goal, set in 2008, to train 10,000 computer science teachers. This objective has led to various initiatives designed to support computer science education at the K–12 level. However, despite some progress, only 57% of US high schools offer foundational computer science courses, only 5.8% of students enrol in these courses, and just 31% of the enrolled students are female. As a result, Aman and his team have worked in close partnership with teachers to address questions that explore ways to more meaningfully integrate CT ideas and practices into formal education, such as:

  • What kinds of experiences do students need to learn computing concepts, to be confident to pursue computing?
  • What kinds of knowledge do teachers need to have to facilitate these learning experiences?
  • What kinds of experiences do teachers need to develop these kinds of knowledge? 

The CT4EDU project

At the primary education level, the CT4EDU project posed the question “What does computational thinking actually look like in elementary classrooms, especially in the context of maths and science classes?” This project involved collaboration with teachers, curriculum designers, and coaches to help them conceptualise and implement CT in their core instruction.

A child at a laptop

During professional development workshops using both plugged and unplugged tasks, the researchers supported educators to connect their day-to-day teaching practice to four foundational CT constructs, illustrated in the short code sketch after this list:

  1. Debugging
  2. Abstraction
  3. Decomposition
  4. Patterns
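
A small code sketch, written for illustration rather than drawn from the project materials, shows how these four constructs can surface in even a tiny program:

    # Illustrative only: a tiny program annotated with the four
    # CT constructs named above.
    def doubled_evens(numbers):
        """Return each even number in the input, doubled."""
        # decomposition: this function handles one small subtask
        # of a larger problem
        result = []
        for n in numbers:
            # patterns: the same rule is applied to every element
            if n % 2 == 0:
                result.append(n * 2)
        # abstraction: callers reuse the function without needing
        # to know its inner details
        return result

    # debugging: a quick check exposes mistakes in the logic
    assert doubled_evens([1, 2, 3, 4]) == [4, 8]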

An emerging aspect of the research team’s work has been the important relationship between vocabulary, belonging, and identity-building, with implications for equity. Actively incorporating CT vocabulary in lesson planning and classroom implementation helps students familiarise themselves with CT ideas: “If young people are using the language, they see themselves belonging in computing spaces”. 

A main finding from the study is that teachers used CT ideas to explicitly engage students in metacognitive thinking processes, and to help them be aware of their thinking as they solve problems. Rather than using CT solely to introduce their students to computing, teachers used CT as a way to support their students in whatever they were learning. This constituted a fundamental shift in the research team’s thinking and future work, which is detailed further in a conceptual article.

The Smithsonian Science for Computational Thinking project

The work conducted for the CT4EDU project guided the approach taken in the Smithsonian Science for Computational Thinking project. This project entailed the development of a curriculum for grades 3 and 5 that integrates CT into science lessons.

Teacher and young student at a laptop.

Part of the project included surveying teachers about the value they place on CT, both before and after participating in professional development workshops focused on CT. The researchers found that even before the workshops, teachers made connections between CT and the rest of the curriculum. After the workshops, an overwhelming majority agreed that CT has value (see image below). From this survey, it seems that CT ties the curriculum together for teachers in ways they had not achieved with other methods they had tried previously.

A graph from Aman's seminar.

Despite teachers valuing the CT approach, asking them to integrate coding into their practices from the start remains a big ask (see image below). Many teachers lack knowledge or experience of coding, and they may not be curriculum designers, which means that we need to develop resources that allow teachers to integrate CT and coding in natural ways. Aman proposes that this requires a longitudinal approach, working with teachers over several years, using plugged and unplugged activities, and working closely with schools’ STEAM or specialist technology teachers where applicable to facilitate more computationally rich learning experiences in classrooms.

A graph from Aman's seminar.

Integrated computational thinking

Aman’s team is also engaged in a research project to integrate CT at middle school level for students aged 11 to 14. This project focuses on the question “What does CT look like in the context of social studies, English language, and art classrooms?”

For this project, the team conducted three Delphi studies, and consequently created learning pathways for each subject, which teachers can use to bring CT into their classrooms. The pathways specify practices and sub-practices to engage students with CT, and are available on the project website. The image below exemplifies the CT integration pathways developed for the arts subject, where the relationship between art and data is explored from both directions: by using CT and data to understand and create art, and using art and artistic principles to represent and communicate data. 

Computational thinking in the primary classroom

Aman’s work highlights the broad value of CT in education. However, to meaningfully integrate CT into the classroom, Aman suggests that we have to take a longitudinal view of the time and methods required to build teachers’ understanding and confidence with the fundamentals of CT, in a way that is aligned with their values and objectives. Aman argues that CT is really about thinking, and sometimes about computing, to support disciplinary learning in primary classrooms. Therefore, rather than focusing on integrating coding into the classroom, he proposes that we should instead talk about using CT practices as the building blocks that provide the foundation for incorporating computationally rich experiences in the classroom. 

Watch the recording of Aman’s presentation:

You can access Aman’s seminar slides as well.

You can find out more about connecting research to practice for primary computing education by watching the recordings of the other seminars in our series on primary (K–5) teaching and learning. In particular, Bobby Whyte discusses similar concepts to Aman in his talk on integrating primary computing and literacy through multimodal storytelling.

Sign up for our seminars

Our 2024 seminar series is on the theme of teaching programming, with or without AI. In this series, we explore the latest research on how teachers can best support school-age learners to develop their programming skills.

On 13 February, we’ll hear from Majeed Kazemi (University of Toronto) about his work investigating whether AI code generator tools can support K–12 students to learn Python programming.

Sign up now to join the seminar:

The post Integrating computational thinking into primary teaching appeared first on Raspberry Pi Foundation.
