
Imagining students’ progression in the era of generative AI

7 June 2024 at 14:14 · Sarah Millar

Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.


We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.


Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.

How do AI tools currently perform?

Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).


Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.

Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.

What are the challenges and opportunities for education?

This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI. 

The opportunities include: 

  • The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
  • More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
  • More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
  • The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how

Some of the challenges include:

  • The lack of reliability and accuracy of outputs from generative AI tools
  • The need to educate everyone about AI to create a baseline level of understanding
  • The legal and ethical implications of using AI in computing education and beyond
  • How to deal with questionable or even intentionally harmful uses of AI, and how to mitigate the consequences of such uses

Programming as a basic skill for all subjects

Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges. 

He emphasised our responsibility to keep students safe. One way to do this is to equip all students with a baseline, age-appropriate level of knowledge about AI, so that they can keep themselves safe.


He also discussed the growing relevance of programming to all subjects, not only Computing, in much the same way that reading and mathematics transcend the boundaries of their own subjects, and the need he sees to adapt subjects and curricula accordingly.

As an example of how rapidly curricula may need to change as students’ use of AI increases, Brett looked at the Irish Computer Science specification for the “senior cycle” (the final two years of second-level education, ages 16–18). This curriculum was developed in 2018 and, in Brett’s opinion, remains a strong computing curriculum. However, he pointed out that it contains only a single learning outcome on AI.

To help educators bridge this gap, Brett and Keith Quille included two chapters dedicated to AI, machine learning, and ethics and computing in the book they wrote to accompany the curriculum. Brett believes additional resources of this kind may be instrumental for teaching and learning about AI, because they are more adaptable and easier to update than curricula.

Generative AI in computing education

Taking the opportunity to use generative AI to create new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. The tool combines learning about generative AI with learning programming through an AI tool.

Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2). 


Figure 2: Example of a student’s use of Promptly.
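
To make this loop concrete, here is a minimal sketch of the prompt, generate, test cycle that a Promptly-style tool automates. It is an illustration only: generate_code is a hypothetical stand-in for a call to an AI code generator (here it returns a deliberately flawed solution so the test feedback has something to report), and the task and test cases are invented.

```python
# Minimal sketch of the prompt -> generate -> test cycle that a
# Promptly-style tool automates. Everything here is illustrative.

def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for an AI code generator. It returns a
    deliberately flawed solution so the feedback loop has work to do."""
    return "def largest(a, b, c):\n    return max(a, b)  # bug: ignores c\n"

# Invented test cases for the task "return the largest of three numbers".
TEST_CASES = [((1, 2, 3), 3), ((5, 4, 0), 5), ((-1, -2, -3), -1)]

def run_tests(source: str) -> list[str]:
    """Execute the generated code and report any failing test cases."""
    namespace = {}
    exec(source, namespace)        # define the generated function
    solve = namespace["largest"]   # the task fixes this function name
    failures = []
    for args, expected in TEST_CASES:
        actual = solve(*args)
        if actual != expected:
            failures.append(f"largest{args}: expected {expected}, got {actual}")
    return failures

# The student's loop: read the failures, refine the prompt, try again.
prompt = "Write a Python function largest(a, b, c) returning the largest of three numbers."
failures = run_tests(generate_code(prompt))
print("All tests passed" if not failures else "\n".join(failures))
```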

Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well. 

Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.

Re-examining the concept of programming

Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”

You can watch Brett’s presentation here:

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar, on Tuesday 11 June at 17:00–18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help learners understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.

To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.



Introducing classroom management to the Code Editor

16 May 2024 at 11:07 · Phil Howell

I’m excited to announce that we’re developing a new set of Code Editor features to help school teachers run text-based coding lessons with their students.


New Code Editor features for teaching

Last year we released our free Code Editor and made it available as an open source project. Right now we’re developing a new set of features to help schools use the Editor to run text-based coding lessons online and in-person.

The new features will enable educators to create coding activities in the Code Editor, share them with their students, and leave feedback directly on each student’s work. In a simple, easy-to-use interface, educators will be able to give students access, group them into classes within a school account, and quickly reset forgotten passwords.

Example Code Editor feedback screen from an early prototype

We’re adding these teaching features to the Code Editor because one of the key problems we’ve seen educators face over the last few months has been the lack of an ideal tool to teach text-based coding in the classroom. There are some options available, but they can be cost-prohibitive for schools and educators. Our mission is to support young people to realise their full potential through the power of computing, and we believe that to tackle educational disadvantage, we need to offer high-quality tools and make them as accessible as possible. This is why we’ll offer the Code Editor and all its features to educators and students for free, forever.


Alongside the new classroom management features, we’re also working on improved Python library support for the Code Editor, so that you and your students can get more creative and use the Editor for more advanced topics. We continue to support HTML, CSS, and JavaScript in the Editor too, so you can set website development tasks in the classroom.


Educators have already been incredibly generous in their time and feedback to help us design these new Code Editor features, and they’ve told us they’re excited to see the upcoming developments. Pete Dring, Head of Computing at Fulford School, participated in our user research and said on LinkedIn: “The class management and feedback features they’re working on at the moment look really promising.” Lee Willis, Head of ICT and Computing at Newcastle High School for Girls, also commented on the Code Editor: “We have used it and love it, the fact that it is both for HTML/CSS and then Python is great as the students have a one-stop shop for IDEs.”

Our commitment to you

  • Free forever: We will always provide the Code Editor and all of its features to educators and students for free.
  • A safe environment: Accounts for education are designed to be safe for students aged 9 and up, with safeguarding front and centre.
  • Privacy first: Student data collection is minimised and all collected data is handled with the utmost care, in compliance with GDPR and the ICO Children’s Code.
  • Best-practice pedagogy: We’ll always build with education and learning in mind, backed by our leading computing education research.
  • Community-led: We value and seek out feedback from the computing education community so that we can continue working to make the Code Editor even better for teachers and students.

Get started

We’re working to have the Code Editor’s new teaching features ready later this year. We’ll launch the setup journey sooner, so that you can pre-register for your school account as we continue to work on these features.

Before then, you can complete this short form to keep up to date with progress on these new features or to get involved in user testing.


The Code Editor is already being used by thousands of people each month. If you’d like to try it, you can get started writing code right in your browser today, with zero setup.



Insights into students’ attitudes to using AI tools in programming education

8 April 2024 at 10:47 · Katharine Childs

Educators around the world are grappling with the problem of whether to use artificial intelligence (AI) tools in the classroom. As more and more teachers start exploring the ways to use these tools for teaching and learning computing, there is an urgent need to understand the impact of their use to make sure they do not exacerbate the digital divide and leave some students behind.


Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.

How students interact with AI tools 

A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The researchers observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot).

The data analysis uncovered five emergent attitudes of students using AI tools to complete programming tasks: 

  • Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
  • Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
  • Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts 
  • Frustrated students struggle with using the AI tool to complete the task and are likely to give up 
  • Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks

Whether these attitudes occur in other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.

An intersectional approach to investigating student attitudes

Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.


In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:

  • Students who were female and the first generation in their family to attend university
  • Students who were female and from an underrepresented ethnic group

Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.

Who thinks using AI tools should be considered cheating? 

Participating students were also asked for their views on using AI tools, with questions such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”

The same intersectional approach was taken to analysing students’ answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds tended to agree that it was, whilst students with less privilege disagreed and said it was not cheating.

This finding comes from a very small group of students at a single university, so Yash and Mary Lou called for other researchers to replicate the study with other groups of students to investigate further.

You can watch the full seminar here:

Acknowledging differences to prevent deepening divides

As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides. 

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI.

For our next seminar on Tuesday 16 April at 17:00–18:30 GMT, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.



Using an AI code generator with school-age beginner programmers

25 March 2024 at 15:25 · Bobby Whyte

AI models for general-purpose programming, such as OpenAI Codex, which powers the AI pair programming tool GitHub Copilot, have the potential to significantly impact how we teach and learn programming. 


The basis of these tools is a ‘natural language to code’ approach, also called natural language programming. This allows users to generate code using a simple text-based prompt, such as “Write a simple Python script for a number guessing game”. Programming-specific AI models are trained on vast quantities of text data, including GitHub repositories, to enable users to quickly solve coding problems using natural language. 
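
As an illustration, the script below is the kind of code such a prompt might produce; it is a hand-written sketch, not the output of any particular model.

```python
# A sketch of what the prompt "Write a simple Python script for a
# number guessing game" might produce.
import random

secret = random.randint(1, 100)
print("I'm thinking of a number between 1 and 100.")

while True:
    guess = int(input("Your guess: "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")
    else:
        print("You got it!")
        break
```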

As a computing educator, you might ask what the potential is for using these tools in your classroom. In our latest research seminar, Majeed Kazemitabaar (University of Toronto) shared his work in developing AI-assisted coding tools to support students during Python programming tasks.

Evaluating the benefits of natural language programming

Majeed argued that natural language programming can enable students to focus on the problem-solving aspects of computing, and support them in fixing and debugging their code. However, he cautioned that students might become overdependent on the use of ‘AI assistants’ and that they might not understand what code is being outputted. Nonetheless, Majeed and colleagues were interested in exploring the impact of these code generators on students who are starting to learn programming.

Using AI code generators to support novice programmers

In one study, Majeed’s team investigated whether students’ task and learning performance were affected by an AI code generator. They split 69 students (aged 10–17) into two groups: one group used a code generator within Coding Steps, an environment that enabled log data to be captured, and the other group did not.


Learners who used the code generator completed significantly more authoring tasks — where students manually write all of the code — and spent less time completing them, as well as generating significantly more correct solutions. In multiple choice questions and modifying tasks — where students were asked to modify a working program — students performed similarly whether they had access to the code generator or not. 

A test was administered a week later to check the groups’ performance, and both groups did similarly well. However, the ‘code generator’ group made significantly more errors in authoring tasks where no starter code was given. 

Majeed’s team concluded that using the code generator significantly increased the completion rate of tasks and student performance (i.e. correctness) when authoring code, and that using code generators did not lead to decreased performance when manually modifying code. 

Finally, students in the code generator group reported feeling less stressed and more eager to continue programming at the end of the study.

Figure: Student perceptions when (not) using AI code generators.

Understanding how novices use AI code generators

In a related study, Majeed and his colleagues investigated how novice programmers used the code generator and whether this usage impacted their learning. Working with data from 33 learners (aged 11–17), they analysed 45 tasks completed by students to understand:

  1. The context in which the code generator was used
  2. What learners asked for
  3. How prompts were written
  4. The nature of the outputted code
  5. How learners used the outputted code 

Their analysis found that students used the code generator for the majority of task attempts (74% of cases) with far fewer tasks attempted without the code generator (26%). Of the task attempts made using the code generator, 61% involved a single prompt while only 8% involved decomposition of the task into multiple prompts for the code generator to solve subgoals; 25% used a hybrid approach — that is, some subgoal solutions being AI-generated and others manually written.
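
To make the distinction concrete, the sketch below contrasts these approaches on an invented task: a single prompt hands the whole problem to the code generator, whereas a hybrid approach decomposes the task into subgoals, generating solutions to some from prompts and writing the others manually. The task, prompts, and function names are all hypothetical.

```python
# Invented task: "Given a list of words, print the longest word and
# the average word length."
words = ["raspberry", "pi", "foundation", "code"]

# Single-prompt approach: one prompt for the whole task, e.g.
# "Write a Python program that prints the longest word in a list and
# the average word length."

# Hybrid approach: decompose the task into subgoals.

# Subgoal 1 (AI-generated from the prompt "Write a Python function
# that returns the longest word in a list"):
def longest_word(items):
    return max(items, key=len)

# Subgoal 2 (written manually by the student):
def average_length(items):
    return sum(len(w) for w in items) / len(items)

# Subgoal 3 (written manually, gluing the pieces together):
print(f"Longest word: {longest_word(words)}")
print(f"Average length: {average_length(words):.1f}")
```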

When students’ approaches were compared against their post-test evaluation scores, there were positive though not statistically significant trends for students who used a hybrid approach (see the image below). Conversely, negative though not statistically significant trends were found for students who used a single-prompt approach.

A positive correlation between hybrid programming and post-test scores

Though not statistically significant, these results suggest that the students who actively engaged with tasks — i.e. generating some subgoal solutions, manually writing others, and debugging their own written code — performed better in coding tasks.

Majeed concluded that while the data showed evidence of self-regulation, such as students writing code manually or adding to AI-generated code, students frequently used the output from single prompts in their solutions, indicating an over-reliance on the output of AI code generators.

He suggested that teachers should support novice programmers in writing better-quality prompts to produce better code.

If you want to learn more, you can watch Majeed’s seminar:

You can read more about Majeed’s work on his personal website. You can also download and use the code generator Coding Steps yourself.

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar on Tuesday 16 April at 17:00–18:30 GMT, we’re joined by Brett Becker (University College Dublin), who will discuss how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for whatever lies ahead. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.



Supporting learners with programming tasks through AI-generated Parson’s Problems

15 February 2024 at 12:55 · Veronica Cucuiat

The use of generative AI tools (e.g. ChatGPT) in education is now common among young people (see data from the UK’s Ofcom regulator). As a computing educator or researcher, you might wonder what impact generative AI tools will have on how young people learn programming. In our latest research seminar, Barbara Ericson and Xinying Hou (University of Michigan) shared insights into this topic. They presented recent studies with university student participants on using generative AI tools based on large language models (LLMs) during programming tasks. 


Using Parson’s Problems to scaffold student code-writing tasks

Barbara and Xinying started their seminar with an overview of their earlier research into using Parson’s Problems to scaffold university students as they learn to program. Parson’s Problems (PPs) are a type of code completion problem where learners are given all the correct code to solve the coding task, but the individual lines are broken up into blocks and shown in the wrong order (Parsons and Haden, 2006). Distractor blocks, which are incorrect versions of some or all of the lines of code (i.e. versions with syntax or semantic errors), can also be included. This means to solve a PP, learners need to select the correct blocks as well as place them in the correct order.
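
As a concrete, invented example, here is what a small PP might look like for the task “print the even numbers from 1 to 10”: the blocks are presented shuffled and include one distractor with a semantic error.

```python
# An invented Parson's Problem for the task "print the even numbers
# from 1 to 10". The blocks are shown shuffled; one is a distractor.

# Block A:
#     print(n)
# Block B:
# for n in range(1, 11):
# Block C (distractor; semantic error: selects odd numbers):
#     if n % 2 == 1:
# Block D:
#     if n % 2 == 0:

# Correct selection and ordering (B, D, A):
for n in range(1, 11):
    if n % 2 == 0:
        print(n)
```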


In one study, the research team asked whether PPs could support university students who struggle to complete write-code tasks. The 11 study participants had the option to generate a PP whenever they got stuck trying to write code from scratch, to help them arrive at the complete code solution; the PPs acted as scaffolding. Solutions used in the generated PPs were derived from past student solutions collected during previous university courses. The study had promising results: participants said the PPs were helpful in completing the write-code problems, and 6 participants stated that the PPs lowered the difficulty of the problem and sped up the problem-solving process, reducing their debugging time. Participants also said that the PPs prompted them to think more deeply.


This study provided further evidence that PPs can be useful in supporting students and keeping them engaged when writing code. However, some participants still had difficulty arriving at the correct code solution, even when prompted with a PP as support. The research team thinks a possible reason is that each PP was based on a single solution, the same one for all participants. Participants who had a different approach in mind would therefore likely have experienced a higher cognitive demand and would not have found that particular PP useful.


Supporting students with varying self-efficacy using PPs

To understand the impact of using PPs with different learners, the team then undertook a follow-up study asking whether PPs could specifically support students with lower computer science self-efficacy. The results show that study participants with low self-efficacy who were scaffolded with PPs showed significantly higher practice performance and higher problem-solving efficiency than participants who had no scaffolding. These findings provide evidence that PPs can create a more supportive environment, particularly for students who have lower self-efficacy or difficulty solving code-writing problems. Another finding was that participants with low self-efficacy were more likely to completely solve the PPs, whereas participants with higher self-efficacy only scanned or partly solved them, indicating that scaffolding in the form of PPs may be redundant for some students.


These two studies highlighted instances where PPs are more or less relevant depending on a student’s level of expertise or self-efficacy. In addition, the best PP to solve may differ from one student to another, and so having the same PP for all students to solve may be a limitation. This prompted the team to conduct their most recent study to ask how large language models (LLMs) can be leveraged to support students in code-writing practice without hindering their learning.

Generating personalised PPs using AI tools

This recent third study focused on the development of CodeTailor, a tool that uses LLMs to generate and evaluate code solutions before generating personalised PPs to scaffold students writing code. Unlike AI-assisted coding tools that merely output a correct code solution, CodeTailor encourages students to engage actively with problem solving: they must construct the solution themselves from a personalised PP. The researchers were interested in whether CodeTailor could better support students to actively engage in code-writing.
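
Purely as an illustration of the general idea, and not of CodeTailor’s actual implementation or API, the sketch below shows how a personalised PP might be assembled from an LLM-repaired version of a student’s incorrect code. The repair_with_llm function is a hypothetical placeholder for the LLM call.

```python
import random

def repair_with_llm(incorrect_code: str) -> str:
    """Hypothetical placeholder for an LLM call that produces a correct
    solution staying close to the student's incorrect code. (CodeTailor
    also evaluates generated solutions before using them.)"""
    # Here we fake the repair: fix an off-by-one range error.
    return incorrect_code.replace("range(1, 10)", "range(1, 11)")

def make_parsons_problem(incorrect_code: str) -> list[str]:
    """Turn the repaired solution into shuffled blocks for a PP."""
    solution = repair_with_llm(incorrect_code)
    blocks = [line for line in solution.splitlines() if line.strip()]
    random.shuffle(blocks)  # the student must reselect and reorder them
    return blocks

student_code = "for n in range(1, 10):\n    if n % 2 == 0:\n        print(n)\n"
for block in make_parsons_problem(student_code):
    print(repr(block))
```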


In a study with 18 undergraduate students, the team found that CodeTailor could generate correct solutions based on students’ incorrect code, and that the CodeTailor-generated solutions were more closely aligned with students’ incorrect code than common previous student solutions were. The researchers also found that most participants (88%) preferred CodeTailor to other AI-assisted coding tools when engaging with code-writing tasks. Because the correct solution in CodeTailor is generated from each student’s existing strategy, it boosted students’ confidence in their current ideas and progress during practice. However, some students still reported challenges around solution comprehension, potentially because CodeTailor did not provide sufficient explanation of the individual code blocks in the PP’s solution. The researchers argue that text explanations could help students fully understand a program’s components, objectives, and structure.

In future studies, the team is keen to evaluate a design of CodeTailor that generates multiple levels of natural language explanations, i.e. provides personalised explanations accompanying the PPs. They also aim to investigate the use of LLM-based AI tools to generate a self-reflection question structure that students can fill in to extend their reasoning about the solution to the PP.

Barbara and Xinying’s seminar is available to watch here: 

Find examples of PPs embedded in free interactive ebooks that Barbara and her team have developed over the years, including CSAwesome and Python for Everybody. You can also read more about the CodeTailor platform in Barbara and Xinying’s paper.

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. 

For our next seminar on Tuesday 12 March at 17:00–18:30 GMT, we’re joined by Yash Tadimalla and Prof. Mary Lou Maher (University of North Carolina at Charlotte). The two of them will share further insights into the impact of AI tools on the student experience in programming courses. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

