Built by educators.
Built for educators.
BeyondGrader addresses core challenges facing educators teaching introductory computer science.
We understand these challenges because we’ve encountered them in our own courses. Part of what makes BeyondGrader different is that our approach was designed and implemented by educators. We know you want to spend more time supporting students and less time writing assessments. So do we.
BeyondGrader supports that goal in three ways:
- Write accurate questions faster
- Provide automated code quality feedback without human inspection
- Build on our existing problem library to give students lots of practice immediately
Rapidly Author Accurate Questions
Authoring programming challenges usually requires writing test suites—a tedious and time-consuming process. And even once you’ve written a few test cases, it’s not clear whether they’re sufficient to identify common mistakes made by students. Other autograding tools might give you a nice web UI to write your tests. But they don’t address the core problem.
Writing problems with BeyondGrader is different. You provide a description and a solution, and we do as much of the rest as possible! Our novel system automatically generates a testing strategy, including inputs to all the methods you expect students to complete. We also validate that the generated testing strategy is accurate by automatically generating failing inputs using the solution you provided.
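To give a feel for the idea (this is an illustrative sketch, not BeyondGrader’s actual implementation), one way a testing strategy can be checked is differential testing: generate random inputs, run both the instructor solution and a submission, and look for inputs where they disagree. The function names below are hypothetical.

```python
import random

def reference_max_of_three(a, b, c):
    # Instructor-provided solution, used as the grading oracle.
    return max(a, b, c)

def buggy_student_max(a, b, c):
    # A submission with a common mistake: it ignores the third argument.
    return a if a > b else b

def find_failing_input(reference, submission, trials=1000, seed=0):
    # Try random inputs until the submission disagrees with the reference.
    rng = random.Random(seed)
    for _ in range(trials):
        args = tuple(rng.randint(-100, 100) for _ in range(3))
        if reference(*args) != submission(*args):
            return args  # an input that exposes the bug
    return None  # no distinguishing input found in this budget

failing = find_failing_input(reference_max_of_three, buggy_student_max)
print(failing)
```

A generated input set that distinguishes the reference from plausibly buggy submissions is evidence the tests are strong enough to catch common student mistakes.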
The end result: Authoring programming challenges with BeyondGrader is faster, more accurate, and more fun. Check out the video above for a demo.
Automated Code Quality Feedback
We know you want your students to learn how to write code that is not just correct, but also good. But manually inspecting code to assess quality is time-consuming, error-prone, and results in feedback that arrives too late for students to incorporate.
BeyondGrader can automatically assess multiple aspects of code quality. We accomplish this through deep integration with the small number of languages we support: Java, Kotlin, and Python (currently in alpha). For each submission, we measure multiple facets of code quality—including style and formatting, cyclomatic complexity, execution efficiency, source line counts, memory usage, and dead code detection. By using the instructor-provided solution as a baseline, we can identify submissions that include too many code paths, complete slowly, consume too much memory, and so on.
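As a rough illustration of one such facet (a simplified sketch, not how BeyondGrader computes it), cyclomatic complexity can be approximated by counting branching constructs in a parsed program, then compared against the instructor solution as a baseline:

```python
import ast

def cyclomatic_complexity(source):
    # Start at 1 (a single straight-line path), then add 1 for each
    # construct that introduces an additional code path.
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.BoolOp)):
            complexity += 1
    return complexity

student_code = """
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0
"""

instructor_code = """
def sign(x):
    return (x > 0) - (x < 0)
"""

# The student's branching version has more code paths than the baseline.
print(cyclomatic_complexity(student_code), cyclomatic_complexity(instructor_code))
```

When a submission's complexity noticeably exceeds the baseline solution's, that gap is a concrete, automatable signal that the code could be simplified.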
The end result: BeyondGrader can immediately provide students with actionable guidance on how to make their correct submissions better. Check out the demo for examples of each code quality check we currently support.
Existing Question Library
BeyondGrader makes authoring problems so quick and fun that we think you’ll probably want to write some yourself—problems that speak to your own students in meaningful ways. However, it takes time to build up a large question library. So you’re also welcome to integrate any of our hundreds of ready-made questions for Java and Kotlin.
(We currently have a much smaller library of Python problems. If you’re interested in Python, get in touch.)