The Consortium for Data Analytics in Risk (CDAR) welcomes Chi Zhang, BIDS Data Science Fellow Chris Kennedy, Kamyar Kaviani, Nikita Vemuri, and Simon Walter to discuss the work they conducted as part of the Chengdu 80 Hackathon. BIDS Data Science Fellow Sören Künzel also participated as a developer on the team.
Abstract: As an alternative to traditional loans, young people could issue securities that pay dividends tied to their future financial success. This type of personal IPO is especially attractive for young people who, for example, need money for a college education, because it shifts the risk of repayment to investors who bet on their future success, unlike a traditional loan. In this seminar we will present a framework for estimating an indicative IPO price for individuals and placing the securities with investors. We will also demo an app designed to make participating in personal IPOs possible for both experienced and first-time investors.
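The announcement does not describe the team's actual pricing framework, but the core idea of valuing a dividend-paying security on future income can be illustrated with a generic discounted-cash-flow toy model. Everything below (the function name, the payout fraction, and the income projection) is an assumption for illustration only, not the speakers' method:

```python
# Illustrative sketch only: prices a hypothetical "personal IPO" security as
# the present value of expected future dividends, where each dividend is a
# fixed fraction of the person's projected annual income.

def indicative_ipo_price(expected_incomes, dividend_rate, discount_rate):
    """Sum of discounted expected dividends over the projection horizon."""
    return sum(
        dividend_rate * income / (1 + discount_rate) ** (t + 1)
        for t, income in enumerate(expected_incomes)
    )

# Example: 10 years of projected income growing 3% annually,
# with 5% of income paid out as dividends, discounted at 8%.
incomes = [60_000 * 1.03 ** t for t in range(10)]
price = indicative_ipo_price(incomes, dividend_rate=0.05, discount_rate=0.08)
print(round(price, 2))
```

A real framework would of course need to model the uncertainty in future income (e.g. by education, field, and labor-market data) rather than a deterministic growth path, which is presumably where the estimation work discussed in the seminar comes in.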
Chris Kennedy is now a postdoctoral fellow in biomedical informatics at Harvard Medical School, focusing on deep learning and causal inference in Gabriel Brat’s surgical informatics lab. He has a PhD in biostatistics from UC Berkeley. He is a senior fellow at UC Berkeley’s D-Lab and is affiliated with the Integrative Cancer Research Group and the Division of Research at Kaiser Permanente Northern California. At BIDS, he was a BIDS - Biomedical Big Data Training (BBDT) Data Science Fellow and a PhD student in biostatistics at UC Berkeley, where he worked with Alan Hubbard. He was also a D-Lab instructor and consultant, and an NIH biomedical big data trainee. His methodological interests encompassed targeted machine learning, randomized trials, causal inference, deep learning, text analysis, signal processing, and computer vision. His applications were primarily in precision medicine, public health, genomics, and election campaigns. His software projects included the SuperLearner ensemble learning system and varImpact for variable importance estimation; he leveraged high performance computing on the Savio and XSEDE clusters to accelerate his work. Prior to Berkeley, he worked in political analytics in DC, running dozens of randomized trials and integrating machine learning into multi-million dollar programs to improve voter turnout for underrepresented Americans. He has also worked to support climate change action through Al Gore’s Climate Reality Project and the Yale Program on Climate Change Communication. He holds an M.A. in political science from UC Berkeley, an M.P.Aff. from the LBJ School of Public Affairs, and a B.A. in government & economics from The University of Texas at Austin.
Sören R. Künzel was a BIDS Data Science Fellow and Ph.D. candidate in the Department of Statistics at UC Berkeley, jointly supervised by Peter Bickel, Jasjeet Sekhon, and Bin Yu. After studying Mathematics and Medicine at the University of Bonn, he spent a year in the Department of Statistics at Yale, where he conducted research on model selection criteria. At UC Berkeley, he was interested in causal inference, machine learning, and experimental design, and he enjoyed solving real-world problems and analyzing the asymptotic behavior of statistical estimators. Together with his supervisors and Allen Tang, he developed an R package to estimate heterogeneous treatment effects (https://github.com/soerenkuenzel/hte), and he also developed a new version of random forests that is particularly well suited for statistical inference. In addition, he released a new algorithm for optimally assigning units to different treatment groups using a variation of the Knowledge Gradient criterion applied to Gaussian process priors.