JupyterCon, and the Next 100 Programming Systems

December 16, 2025

JupyterCon 2025 was an amazing tour of all the exciting things happening across the Jupyter ecosystem: education, health, astronomy, geoscience, and many more. As a computer scientist who thinks about the future of programming, I found this genuinely exciting. But one of the most interesting threads of the conference was not just how the Jupyter ecosystem currently supports many diverse forms of computing, but how it might serve as the platform for a future of scientific computing that looks very different from notebooks as we know them. My question is: how might the Jupyter ecosystem support not just the future of notebooks, but the next hundred programming systems?

Building the Next Scientific Programming Interface with JupyterLab

My experience at JupyterCon convinced me that JupyterLab is one of the most exciting platforms not just for notebooks, but for computing in general. The components of the JupyterLab system can support not only today's notebooks but could also serve as a foundation for a wild diversity of computing interfaces.

Stephen Macke (Software Engineer, Databricks) shared his ongoing work on a reactive kernel for Jupyter, building a programming system that works very differently from a normal notebook while using the same interface components that users are familiar with. Matt Fisher (Community Manager & Software Engineer, Schmidt Center for Data Science & Environment) shared progress from the GeoJupyter community on the JupyterGIS project, which integrates GIS tools into the notebook environment. These systems retain a familiar notebook interface while supporting new computing challenges.
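To make the "reactive" idea concrete: in a reactive system, editing one cell automatically re-runs the cells that depend on it, rather than leaving the notebook in a stale state. The sketch below is a toy illustration of that dependency-driven re-execution, invented for this post; the class, its explicit reads/writes declarations, and its re-run strategy are my own simplification, not how Macke's actual kernel works.

```python
# A minimal, hypothetical sketch of reactive cell re-execution: each "cell"
# declares the names it reads and writes, and editing one cell re-runs any
# cell that reads what it wrote. Illustration only, not a real kernel.

class ReactiveCells:
    def __init__(self):
        self.cells = {}       # cell_id -> (source, reads, writes)
        self.namespace = {}   # shared variable namespace
        self.log = []         # order in which cells actually ran

    def define(self, cell_id, source, reads, writes):
        """Add or edit a cell, then run it (and its dependents)."""
        self.cells[cell_id] = (source, set(reads), set(writes))
        self._run(cell_id)

    def _run(self, cell_id):
        source, _, writes = self.cells[cell_id]
        exec(source, self.namespace)
        self.log.append(cell_id)
        # Re-run any other cell that reads what this cell just wrote.
        for other, (_, reads, _) in self.cells.items():
            if other != cell_id and reads & writes:
                self._run(other)

cells = ReactiveCells()
cells.define("a", "x = 1", reads=[], writes=["x"])
cells.define("b", "y = x + 1", reads=["x"], writes=["y"])
cells.define("a", "x = 10", reads=[], writes=["x"])  # re-runs "b" automatically
print(cells.namespace["y"])  # 11
```

A real reactive kernel infers these dependencies from the code itself and has to handle cycles and side effects, which this toy version deliberately ignores.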

[Image: a circle chart showing different parts of the community, beside a speaker at a podium.]

I’m also excited about the interface opportunities that might come with the recent work on kernel subshells, which we heard about from Ian Thomas (Scientific Software Developer, QuantStack). Rather than thinking of a notebook as a bank of code we run in series, how might the JupyterLab notebook interface change once we can treat notebooks as palettes of code we send in parallel to a kernel?
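As a loose analogy for what that "palette" framing enables, the sketch below dispatches several cells to a shared namespace at once instead of queueing them in series. Plain Python threads stand in for subshells here purely for illustration; the real mechanism lives in the Jupyter kernel messaging protocol, not in user-level threading.

```python
# Toy analogy for parallel cell dispatch: three "cells" run concurrently
# against a shared namespace, rather than one after another. Threads are a
# stand-in for kernel subshells, which work very differently under the hood.

from concurrent.futures import ThreadPoolExecutor
import time

def run_cell(source, namespace):
    """Simulate a long-running cell: sleep briefly, then execute the source."""
    time.sleep(0.1)
    exec(source, namespace)

namespace = {}
palette = ["a = 1 + 1", "b = 2 * 3", "c = sum(range(4))"]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(palette)) as pool:
    # All three cells are in flight at once instead of queued in series.
    for source in palette:
        pool.submit(run_cell, source, namespace)
elapsed = time.monotonic() - start  # ~0.1 s here; a serial run would take ~0.3 s

print(namespace["a"], namespace["b"], namespace["c"])  # 2 6 6
```

The interesting interface question is what a notebook looks like when this is the default mental model: which cells are safe to send together, and how the UI communicates what is in flight.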

My own research is another example of this sort of work: building on top of JupyterLab to support a new interaction while still being able to leverage all of the powerful tools and infrastructure in the Jupyter ecosystem.

But we could take this even further: JupyterLab as a platform doesn’t necessarily require that the user interact with cells directly. With a reactive notebook, the cells could just as easily be an underlying abstraction layer beneath a different kind of interface on the program. While notebooks are certainly a popular way to interact with programs, JupyterLab as a computing platform might be capable of supporting programming interfaces that look nothing like notebooks yet still operate smoothly within the existing infrastructure.

JupyterHub as a Platform for a New Ecosystem of Tools

Beyond JupyterLab as a foundation for computing interfaces, I’m also excited by the work happening in the JupyterHub community. JupyterHub currently supports impressive scientific computing with large data and processing needs, like the CryoCloud community we heard about from Tasha Snow (Research Scientist, University of Maryland; CryoCloud co-founder).

I was particularly inspired by Yuvi (Co-founder and Tech Lead, 2i2c) and his great talk exploring how JupyterHub might support a variety of computing applications: hosting other interactive computing systems like RStudio, serving web applications straight from a Jupyter notebook, and interactive data visualizations that can open directly into a QGIS desktop application reproducing the user’s exact view. Now that JupyterHub provides an easy place to co-locate interactive computing systems with large data repositories, I’m looking forward to a whole new ecosystem of interconnected computing systems. I also think JupyterHub and JupyterLab together can enable domain users to create their own programming interfaces and tools and immediately deploy them on a powerful cloud computing platform. How might we build direct-manipulation interfaces that interoperate with a Jupyter notebook, allowing a user to manipulate and visualize the same data through multiple modalities at the same time?

[Image: a world map graphic, beside a speaker at a podium.]

User Research and Open Source Communities

By day three, during the community sprints, I was left thinking about what role I (and user researchers like me) might play in the Jupyter community. Among groups working on additions to every part of the Jupyter ecosystem, I wondered where I could best contribute. As a researcher in Human-Computer Interaction, I think a lot about how to study scientific programming as it happens, and how that research might inform future programming interface design. Now more than ever, users across a diverse range of programming abilities, domains, and use cases are using Jupyter tools to do their work. How might we best learn about their needs and design for them?

Personally, I’m excited by the role that deep user research could play in answering this question. In my own experience, learning enough about a researcher’s work to ‘speak the language’ has often been the first step to understanding not just their programming problems, but what their programming work actually accomplishes: how they’re using a Jupyter notebook to ask a scientific question, and how we might better support that work once we understand it more holistically.

This is primarily a call for my own research field of Human-Computer Interaction to think more about how we might better contribute to open-source communities like Jupyter—doing long-form, detailed user research is difficult and time-consuming work, but it might also lead to imagining new ways of computing that we otherwise wouldn’t have arrived at.