“One picture is worth a thousand words,” said Fred Barnard in 1927, advocating for more illustrations and promoting his agency’s ads. The power of pictures is undeniable, but the flood of pictorial information has become unprecedented and overwhelming. More than a thousand computational algorithms exist to extract useful information from pictures, resulting in increasingly complex, and redundant, domain-specific jargon to describe what are intrinsically the same image processing tasks. What if we could approach these core computer vision problems and process images as one community?
These are the questions driving the data scientists behind ImageXD, which was founded by Berkeley and Washington partners from the Moore-Sloan Data Science Environments (MSDSE) to address these challenges. ImageXD is a three-day workshop designed to discuss common problems in image processing across domains, from academic disciplines such as deep learning, radiology, and materials science to industry applications of computer vision for social good.
Ninety researchers from 19 different institutions participated in the third annual ImageXD event, illustrating the interest in cross-cutting discussion of image processing. Registration was open for only 17 days, with 64% of attendees registering within the first five days. ImageXD accommodated, free of charge, a selected group of roughly half of the registrants, representing expertise in computer vision, microscopy, materials imaging, photography, earth science, neuroscience, astronomy, software development, and more.
The common bond among these researchers from such varied disciplines is that they all work with images as a primary source of data. Throughout the event, we learned from one another, strengthened ties across disciplinary boundaries, and began developing collaborations that we hope will have a lasting impact on the image processing community.
The ImageXD event included three main activities:
Learn - On Day 1, Stefan van der Walt (BIDS) opened the event with technical tutorials introducing Python methods for analyzing image data, from linear algebra fundamentals to 3D image segmentation. Next, Ariel Rokem (UW) delivered a hands-on course on deep learning using Keras on Azure, followed by Peter Chang (UCSF), who used TensorFlow on Google Cloud instead. Finally, Valentina Staneva (UW) taught parallel processing with Dask in Python. The training materials are all openly available.
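To give a flavor of the segmentation material covered on Day 1 (this is an illustrative sketch, not the actual tutorial code), here is a minimal NumPy implementation of Otsu thresholding, a classic histogram-based method, applied to a synthetic 3D volume:

```python
import numpy as np

def otsu_threshold(image):
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist, bin_edges = np.histogram(image.ravel(), bins=256)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    # cumulative pixel counts below/above each candidate threshold
    weight1 = np.cumsum(hist)
    weight2 = np.cumsum(hist[::-1])[::-1]
    # cumulative class means below/above each candidate threshold
    mean1 = np.cumsum(hist * bin_centers) / weight1
    mean2 = (np.cumsum((hist * bin_centers)[::-1]) / weight2[::-1])[::-1]
    # between-class variance for a split between bins i and i+1
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_centers[:-1][np.argmax(variance)]

# synthetic 3D "scan": a bright sphere embedded in a noisy background
rng = np.random.default_rng(0)
z, y, x = np.mgrid[:32, :32, :32]
volume = rng.normal(0.2, 0.05, (32, 32, 32))
volume[(z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 64] += 0.6
mask = volume > otsu_threshold(volume)  # binary segmentation of the sphere
```

The same one-liner segmentation scales from 2D photographs to 3D tomography volumes, which is part of why such fundamentals transfer so well across the domains represented at ImageXD.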
Discuss - On the mornings of Day 2 and Day 3, invited speakers gave individual talks, followed by joint panel discussions over lunch. The morning speakers addressed topics in pattern recognition, how the technology has advanced, and what to expect in the future:
Maxim Ziatdinov — Oak Ridge National Laboratory
Deep learning for atomically resolved imaging techniques: chemical identification and tracking local transformations
James Coughlan — Smith-Kettlewell Eye Research Institute
Computer vision for the visually impaired
Amit Kapadia — Planet Labs
Building Global Mosaics
Natalie Larson — UC Santa Barbara
In-situ X-ray computed tomography for defect evolution
John Canny — UC Berkeley
Deep net visualization, interpretable driving
John Kirkham — Howard Hughes Medical Institute
Interactively analyzing larger-than-memory neural imaging data
Matt McCormick — Kitware, Inc.
Interactive Analysis and Visualization of Large Images in the Web Browser
Suhas Somnath — Oak Ridge National Laboratory
Pycroscopy: a Python package for analyzing, storing, and visualizing multidimensional scientific imaging data
James Sethian — CAMERA, Lawrence Berkeley National Laboratory
Mathematics for imaging across domains
Deep Ganguli — Chan Zuckerberg Initiative
Starfish: a Python library for image-based transcriptomics
Create - During the afternoons of Day 2 and Day 3, participants broke out into discussion groups around one of ten proposed “hacks,” with themes ranging from photovoltaics to biomedical imaging and numerical schemes including matrix factorization, autoencoders, and multidimensional spectral analysis. Small teams worked on several projects, whose results were presented on the last day.
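One of the numerical schemes mentioned above, matrix factorization, fits in a few lines of NumPy. The toy example below (a hypothetical illustration using synthetic data, not an actual hack result) factors a stack of flattened "images" into non-negative components via the classic Lee-Seung multiplicative updates:

```python
import numpy as np

def nmf(V, rank, n_iter=300, seed=0):
    """Non-negative matrix factorization via multiplicative updates:
    approximate V (n_samples x n_features) as W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    eps = 1e-10  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic data: 100 flattened 8x8 frames built from 3 hidden components
rng = np.random.default_rng(1)
V = rng.random((100, 3)) @ rng.random((3, 64))
W, H = nmf(V, rank=3)
rel_error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each row of H can be reshaped back into an 8x8 image and interpreted as a learned "part," which is what makes this family of methods attractive for spectral and imaging data alike.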
At the conclusion of the event, it was apparent that ImageXD had hit a sweet spot: rallying an interdisciplinary group that nonetheless shared a common bond. Participants described the activities as impressive in showing how seemingly different fields tackle imaging challenges in similar ways. Other comments included: “I enjoyed the perspectives of the medical and physical sciences folks”, “I enjoyed having a few theory talks included as well. Provided some interesting ideas to explore”, and “Think some interesting tooling for interactively viewing image data has come out from new developments” during the event.
Moving forward, the ImageXD’2018 organizers, Dani Ushizima (BIDS/LBNL), Stefan van der Walt (BIDS), Maryam Vareth (BIDS/LLNL), Dmitry Morozov (BIDS/LBNL), Maryana Alegro (BIDS/UCSF), and Elizabeth Brashers (BIDS/UCSF), are excited about what this community hopes to accomplish. Currently, they are working to establish an online community at www.imagexd.org to facilitate discussions among image processing practitioners across domains and sectors, including academia, industry, non-profits, and government research labs. They hope that this combination of in-person and online forums will give many people a meeting place: a place to learn from each other and to create new tools, ideas, and knowledge together.