JupyterLab: Building Blocks for Scientific Computing

PyData San Francisco 2016


August 13, 2016
10:00am to 10:45am
San Francisco, CA

Conference Website

Conference Schedule

Event Description:
PyData conferences bring together users and developers of data analysis tools to share ideas and learn from each other. The PyData community gathers to discuss how best to apply Python tools, as well as tools using R and Julia, to meet evolving challenges in data management, processing, analytics, and visualization. PyData conferences aim to be accessible, community-driven events, with tutorials for novices, advanced topical workshops for practitioners, and opportunities for package developers and users to meet in person.

Talk Description:
This talk provides an early view of JupyterLab, an evolution of the Jupyter Notebook that provides a modular and extensible user interface within the context of a powerful workspace.

Project Jupyter provides building blocks for interactive and exploratory computing. These building blocks make science and data science reproducible across more than 40 programming languages (Python, Julia, R, etc.). Central to the project is the Jupyter Notebook, a web-based interactive computing platform that lets users author computational narratives: documents driven by data and code that combine live code, equations, narrative text, visualizations, interactive dashboards, and other media. The fundamental idea of JupyterLab is to offer a user interface that supports interactive workflows that include, but go far beyond, Jupyter Notebooks. In JupyterLab, users can arrange multiple notebooks, text editors, terminals, output areas, and more on a single page with multiple panels, tabs, splitters, and collapsible sidebars containing a file browser, command palette, and integrated help system. The codebase and UI of JupyterLab are built on a flexible plugin system that makes it easy to extend JupyterLab with new components.
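The plugin system mentioned above can be sketched as follows. This is a simplified, illustrative model written for this page, not JupyterLab's actual extension API: the `Plugin`, `Application`, and `fileBrowserPlugin` names are assumptions made up for the example.

```typescript
// Illustrative sketch of a plugin-based application, in the spirit of
// JupyterLab's architecture. Not the real JupyterLab API.

interface Plugin<T> {
  // Unique identifier, also used as the key for the service it provides.
  id: string;
  // Called once at startup; returns the service this plugin contributes.
  activate: (app: Application) => T;
}

class Application {
  private services = new Map<string, unknown>();
  private plugins: Plugin<unknown>[] = [];

  // Extensions register plugins before the application starts.
  register(plugin: Plugin<unknown>): void {
    this.plugins.push(plugin);
  }

  // Activate every registered plugin and record the service it provides.
  start(): void {
    for (const plugin of this.plugins) {
      this.services.set(plugin.id, plugin.activate(this));
    }
  }

  // Other plugins look up services by id instead of importing them directly.
  resolve<T>(id: string): T {
    return this.services.get(id) as T;
  }
}

// Hypothetical example: a file-browser plugin exposing a tiny service.
const fileBrowserPlugin: Plugin<{ open: (path: string) => string }> = {
  id: "filebrowser",
  activate: () => ({ open: (path) => `opened ${path}` }),
};

const app = new Application();
app.register(fileBrowserPlugin);
app.start();

const browser = app.resolve<{ open: (p: string) => string }>("filebrowser");
console.log(browser.open("notebook.ipynb")); // → "opened notebook.ipynb"
```

Because components interact only through registered services, a new sidebar panel, editor, or output area can be added without modifying the core application, which is the design choice that makes the JupyterLab UI extensible.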


Jamie Whitacre

Former Jupyter Technical Project Manager

Jamie was the technical project manager for Project Jupyter. She collaborated with Jupyter’s developers and open source community at large to define development strategy, advance feature work, and build community involvement. Jamie has more than 10 years of experience in scientific computing systems, informatics, and data analysis. Her specialties include integrating research data and systems, streamlining data workflows, cleaning data, and educating users about data tools and workflows. Jamie previously worked at the Smithsonian’s National Museum of Natural History, designing and developing data pipelines in support of the Global Genome Initiative. She has held positions in academia, government, and industry. She earned her graduate degree in Geography from the University of Maryland and her undergraduate degree in Biology from Whitman College.