Location: Room 101
Time: 2:00 PM, October 2, 2017

Call for Papers

Deep neural networks have recently achieved breakthroughs in major artificial intelligence tasks such as machine translation, image understanding, and speech recognition. In these tasks, deep neural networks have reached accuracy comparable to, or even better than, human performance. However, understanding deep neural networks remains challenging because of their complicated inner workings.

VADL, the workshop on visual analytics for deep learning, is a half-day workshop held in conjunction with IEEE VIS 2017 in Phoenix, AZ. The primary goal of the workshop is to bridge the gap between the machine learning and visual analytics communities by bringing together researchers from both fields, allowing us to push the boundaries of deep learning. The workshop will provide an opportunity to discuss and explore ways to harmonize the power of automated techniques with the exploratory nature of interactive visualization.

Topics of Interest

We invite papers related to visual analytics for deep learning. Topics of interest include, but are not limited to:

  • User-adaptive approaches to deep neural networks
  • Coupling of deep learning-based systems with interactive visualization
  • Visualization to improve deep learning algorithms and models
  • Coordinated visualizations for deep learning
  • Systems, languages, and architectures for interactive deep learning
  • Collaborative analysis of deep neural networks
  • Real-time visualization of streaming data in deep learning
  • Visualization of uncertainty in deep neural networks
  • Domain-specific applications of visual analysis of deep neural networks (e.g., retail, healthcare, government)
  • Studies and evaluation of deep learning visualization techniques, systems, metrics, and benchmarks

Submission Information

All papers will be peer-reviewed in a single-blind process. We welcome many kinds of papers, including (but not limited to):

  • Novel research papers
  • Demo papers
  • Work-in-progress papers
  • Visionary papers (white papers)
  • Position papers
  • Relevant work that has been previously published
  • Work that will be presented at the main conference of IEEE VIS’17

Authors should clearly indicate the kind of submission in their abstracts to help reviewers better understand the contribution. Submissions must be in PDF, written in English, no more than 10 pages long (shorter papers of 2 to 4 pages are welcome), and formatted using the IEEE “Conference Style” template. Papers should be submitted through the Precision Conference System under VIS Workshops 2017. Accepted papers will be posted on the workshop website and included in the IEEE VIS’17 electronic proceedings USB stick. For each accepted paper, at least one author must attend the workshop to present the work.

Important Dates

  • Submission: July 31, 2017 (Extended)
  • Author notification: August 15, 2017
  • Camera-ready: August 25, 2017
  • Workshop date: October 2, 2017

Organizers

Jaegul Choo

Jaegul Choo is an assistant professor in the Department of Computer Science and Engineering at Korea University. He earned his Ph.D. at Georgia Tech in 2013, where he continued working as a research scientist until 2015. His research focuses on combining machine learning and interactive visualization for data mining. He has developed interpretable, interactive machine learning approaches as well as sophisticated visual analytics systems that use them to support complex human-in-the-loop analysis tasks. Jaegul has published in premier venues in data mining, machine learning, and visual analytics, including TVCG, CG&A, VAST, KDD, WWW, WSDM, ICDM, ICWSM, SDM, AAAI, and IJCAI. He received the Best Student Paper Award at ICDM 2016 and the Best Poster Award at IEEE VAST (part of IEEE VIS) 2014.

Jason Yosinski

Jason Yosinski is a machine learning researcher and founding member of Uber AI Labs, where he uses neural networks and machine learning to build better, more understandable AI. He was previously a PhD student and NASA Space Technology Research Fellow working at the Cornell Creative Machines Lab, the University of Montreal, the Caltech Jet Propulsion Laboratory, and Google DeepMind. His work on AI has been featured by NPR, Fast Company, The Economist, TEDx, and the BBC. When not doing research, Mr. Yosinski enjoys tricking middle school students into learning math while they play with robots.

Deokgun Park

Deokgun Park is a doctoral candidate advised by Niklas Elmqvist in the Department of Computer Science at the University of Maryland, College Park. He is interested in using visual analytics to support humans in solving open-ended text-mining tasks. He approaches the evaluation of the unstructured output of deep learning models as an open-ended task in which humans can point out weak areas of a model and teach the model to improve. Deokgun has published more than 10 academic papers and registered more than 20 patents. He received an Honorable Mention Award at ACM CHI 2016 and the Best Poster Award at LabAutomation 2002.

Shixia Liu

Shixia Liu is an associate professor in the School of Software, Tsinghua University. Her research interests include visual text analytics, visual social analytics, visual model analytics, and text mining. She has published over 40 refereed papers in leading journals and conferences such as IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Knowledge and Data Engineering, IEEE InfoVis, IEEE VAST, KDD, and WWW. Shixia is an associate editor of IEEE Transactions on Visualization and Computer Graphics and serves on the editorial board of Information Visualization. She is the papers co-chair of IEEE VAST 2016 and 2017 and was the program co-chair of PacificVis 2014 and VINCI 2012. She has served as a guest editor for ACM Transactions on Intelligent Systems and Technology and Tsinghua Science and Technology. Shixia received Honorable Mention Awards at ACM CIKM 2009 and IEEE VAST 2014.

Program Committee