Post-Training Evaluation with Binder

Jessica Forde, Chris Holdgraf, Yuvi Panda, Aaron Culich, Matthias Bussonnier, Min Ragan-Kelley, Carol Willing, Tim Head, Fernando Perez, Brian Granger

Semantic Scholar

Abstract
'Black box' models are increasingly prevalent in our world and have important societal impacts, but they are often difficult to scrutinize or evaluate for bias. Binder gives anyone in the community the opportunity to examine a machine learning pipeline, promoting fairness, accountability, and transparency. Binder is used to create custom computing environments that can be shared and used by many remote users, enabling a user to build and register a Docker image from a repository and connect with JupyterHub. Users can select a specific branch name, commit, or tag to serve. Binder combines two projects: JupyterHub, which provides a scalable system for authenticating users and launching Jupyter Notebook servers, and repo2docker, which generates a Docker image from a Git repository. When connected with JupyterLab, users can navigate a repository on Binder with an IDE as if they were developing the project locally and can explore all underlying data (CSV, JSON, image, etc.). JupyterHub, repo2docker, and JupyterLab work together on Binder to allow a user to evaluate a machine learning pipeline with much greater transparency than a typical publication or GitHub page. Together, these three projects promote fairness, accountability, and transparency in machine learning.
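The branch, commit, or tag selection described in the abstract maps onto the public mybinder.org launch-URL scheme, `/v2/<provider>/<owner>/<repo>/<ref>`. The sketch below builds such a URL in Python; the repository names are illustrative, and `binder_url` is a hypothetical helper, not part of any Binder API:

```python
def binder_url(owner: str, repo: str, ref: str = "HEAD", provider: str = "gh") -> str:
    """Build a mybinder.org launch URL for a Git repository.

    provider "gh" means GitHub; ref may be a branch name, tag, or commit SHA.
    """
    return f"https://mybinder.org/v2/{provider}/{owner}/{repo}/{ref}"


# Example: launch the main branch of an (illustrative) GitHub repository.
print(binder_url("jupyterhub", "binder", "main"))
```

Because the `ref` segment accepts a commit SHA, a reviewer can pin a Binder session to the exact repository state used in a paper, which is what makes the evaluation reproducible.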