<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>4th VISxAI Workshop at IEEE VIS 2021</title>
<link rel="stylesheet" href="node_modules/bootswatch/dist/sandstone/bootstrap.css">
<link rel="stylesheet" href="styles.css">
<link href="https://fonts.googleapis.com/css?family=IBM+Plex+Mono|IBM+Plex+Sans" rel="stylesheet">
<link rel="icon" type="image/png" sizes="32x32" href="favicon-32x32.png">
<link rel="icon" type="image/png" sizes="96x96" href="favicon-96x96.png">
<link rel="icon" type="image/png" sizes="16x16" href="favicon-16x16.png">
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
integrity="sha256-3edrmyuQ0w65f8gfBsqowzjJe2iM6n0nKciPUp8y+7E=" crossorigin="anonymous"></script>
<script src="node_modules/bootstrap/js/dist/util.js"></script>
<script src="node_modules/bootstrap/js/dist/collapse.js"></script>
<!-- Share card -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@visxai" />
<meta name="twitter:creator" content="@visxai" />
<meta property="og:url" content="http://visxai.io" />
<meta property="og:title" content="Workshop on Visualization for AI Explainability" />
<meta property="og:description"
content="The role of visualization in artificial intelligence (AI) gained significant attention in recent years. With the growing complexity of AI models, the critical need for understanding their inner-workings has increased. Visualization is potentially a powerful technique to fill such a critical need. The goal of this workshop is to initiate a call for 'explainables' / 'explorables' that explain how AI techniques work using visualization. We believe the VIS community can leverage their expertise in creating visual narratives to bring new insight into the often obfuscated complexity of AI systems."/>
<meta property="og:image" content="http://visxai.github.io/img/share.png" />
</head>
<body>
<div id="banner">
VISxAI is back! Join us at <a href="http://visxai.io">VISxAI 2022 at IEEE VIS</a> in Oklahoma City, Oklahoma!
</div>
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="/"><span class="vxa">VISxAI</span></a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarColor03"
aria-controls="navbarColor03" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarColor03">
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="nav-link" href="submit.html">SUBMIT</a>
</li>
<!-- <li class="nav-item">
<a class="nav-link" href="#program">Program</a>
</li> -->
<li class="nav-item">
<a class="nav-link" href="#dates">Dates</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#call">CFP</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#hall-of-fame">Hall of Fame</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#orga">Organizers</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#pc">PC</a>
</li>
<li class="nav-item">
<a class="nav-link" href="2020.html">2020</a>
</li>
<li class="nav-item">
<a class="nav-link" href="2019.html">2019</a>
</li>
<li class="nav-item">
<a class="nav-link" href="2018.html">2018</a>
</li>
</ul>
</div>
</nav>
<div class="container" style="margin-top: 12pt;">
<div class="float-right">
<!--<img src="img/logo.png" height="70"/>-->
<img src="img/logo_v2.png" height="70" />
</div>
<!--<div style="position: absolute; right:5px;">-->
<!--</div>-->
<h2>4<sup>th</sup> Workshop on <br> <b>Visualization for AI Explainability</b></h2>
<p>October 25th, 2021 at IEEE VIS <s>in New Orleans, Louisiana</s> Online</p>
<!-- <p class="text-center" style="font-size: 14pt;">
<b>PROGRAM IS ONLINE. <a href="program.html"> CLICK HERE !!!</a> </b>
</p> -->
<p>
The role of visualization in artificial intelligence (AI) has gained
significant attention in recent years. With the growing complexity of AI
models, the critical need for understanding their inner workings has
increased. Visualization is potentially a powerful technique to fill
such a critical need.
</p>
<p>
The goal of this workshop is to initiate a call for <i>"explainables" / "explorables"</i> that
explain how AI techniques work using visualization. We believe the VIS
community can leverage their expertise in creating visual narratives to
bring new insight into the often obfuscated complexity of AI systems.
</p>
<!--<p class="text-center" style="font-size: 14pt;">-->
<!--<b>PROGRAM IS ONLINE. <a href="program.html"> CLICK HERE !!!</a> </b>-->
<!--</p>-->
<p class="text-center">
<img class="img-fluid" src="img/examples-2021.png">
<div class="figure-caption">Example interactive visualization articles that explain general concepts and communicate experimental insights when playing with AI models.
(a) <a href="https://distill.pub/2019/visual-exploration-gaussian-processes/" target="_blank">A Visual Exploration of Gaussian Processes</a> by Görtler, Kehlbeck, and Deussen;
(b) <a href="https://idyll.pub/post/dimensionality-reduction-293e465c2a3443e8941b016d/" target="_blank">The Beginner's Guide to Dimensionality Reduction</a> by Conlen and Hohman;
(c) <a href="https://theo-jaunet.github.io/MemoryReduction/" target="_blank">What if we Reduce the Memory of an Artificial Doom Player?</a> by Jaunet, Vuillemot, and Wolf;
(d) <a href="https://tiga1231.github.io/umap-tour/" target="_blank">Comparing DNNs with UMAP Tour</a> by Li and Scheidegger;
(e) <a href="https://parametric.press/issue-01/the-myth-of-the-impartial-machine/" target="_blank">The Myth of the Impartial Machine</a> by Feng and Wu;
(f) <a href="http://formafluens.io/client/mix10.html">FormaFluens Data Experiment</a> by Strobelt, Phibbs, and Martino.
</div>
</p>
<h2 id="dates">Important Dates</h2>
<p><i>Note: Dates could be revised due to the ongoing <a href="https://www.cdc.gov/coronavirus/2019-ncov/index.html">COVID-19 outbreak</a>.</i></p>
<pre>
<s>July 30, 2021</s> August 6, 2021, anywhere: Explainables Submission
September 10, 2021: Author Notification
October 25, 2021: Workshop <s>in New Orleans</s> online at IEEE VIS 2021
</pre>
<!-- September 1, 2021: Camera-ready Copy for Accepted Submissions -->
<!-- September ?, 2021: VIS Registration for 2021 -->
<h2 id="program">Program Overview</h2>
<p>
All times in CDT (UTC -5) on Monday, October 25, 2021.
<br>
<br>→ To attend, register for free at <a href="http://ieeevis.org/year/2021/info/registration/conference-registration">IEEE VIS</a>.
<!-- <br>→ <a href="calendar/VISxAI2020.ics">Add to your calendar.</a> -->
<br>→ <a href="https://virtual.ieeevis.org/year/2021/session_a-visxai.html">Join the virtual even here!</a>
</p>
<table style="padding: 5pt;">
<tr>
<td class="schedule">12:00 -- 12:05</td>
<td><b>Welcome from the Organizers</b></td>
</tr>
<tr>
<td class="schedule">12:05 -- 1:00</td>
<td><b>Keynote: David Ha (Google) - <a href="https://twitter.com/hardmaru" target="_blank">@hardmaru</a></b>
<br>
<b>Using the Webpage as the Main Medium for Communicating Research Ideas</b>
<br>
While papers are the main means for communicating scientific results, both quantitative and qualitative, the machine learning community’s expectations have moved above and beyond the paper format. Machine learning models are expected to be ultimately used by people, in devices, computers, and other applications. In recent years we have witnessed the popularity of work published as web articles and interactive demos, enabling the reader to interact with machine learning models to experience the features and limitations of cutting edge methods. This comes with costs, as development and deployment of interactive websites consume time and energy from the researcher's point of view. In particular, the audience may find flaws in the model by interacting with it in ways unintended by the authors, who may simply wish to report a score against a benchmark. In this talk, I will discuss my own experiences developing these interactive web browser demos for my own research and others’ in the literature as a series of case studies. By the end of the talk, the audience will be familiar with the different approaches and their tradeoffs used in the development of web demos for research, to be able to assess whether it is something they wish to do for their own projects.
</td>
</tr>
<tr>
<td class="schedule">1:00 -- 1:30</td>
<td><b>Session I</b>
<br>
<a href="https://pair.withgoogle.com/explorables/fill-in-the-blank/">
What Have Language Models Learned?
</a>
-- Adam Pearce
<br>
<a href="http://www.cs.umd.edu/~amin/apps/visxai/sonification/">
Feature Sonification: An investigation on the features learned for Automatic Speech Recognition
</a>
-- Amin Ghiasi, Hamid Kazemi, W. Ronny Huang, Emily Liu, Micah Goldblum, Tom Goldstein
<br>
<a href="https://ruthcfong.github.io/projects/interactive_overlay/">
Interactive Similarity Overlays
</a>
-- Ruth Fong, Alexander Mordvintsev, Andrea Vedaldi, Chris Olah
<br>
</td>
</tr>
<tr>
<td class="schedule">1:30 -- 2:00</td>
<td><b>Break</b></td>
</tr>
<tr>
<td class="schedule">2:00 -- 2:30</td>
<td><b>Session II</b>
<br>
<a href="https://interactive-maml.github.io/">
An Interactive Introduction to Model-Agnostic Meta-Learning
</a>
-- Luis Müller, Max Ploner, Thomas Goerttler, Klaus Obermayer
<br>
<a href="https://bert-vs-gpt2.dbvis.de/">
Demystifying the Embedding Space of Language Models
</a>
-- Rebecca Kehlbeck, Rita Sevastjanova, Thilo Spinner, Tobias Stähle, Mennatallah El-Assady
<br>
<a href="https://xnought.github.io/backprop-explainer/">
Backprop Explainer: An Explanation with Interactive Tools
</a>
-- Donald Bertucci, Minsuk Kahng
<br>
</td>
</tr>
<tr>
<td class="schedule">2:30 -- 2:35</td>
<td><b>Project Pitch Videos</b></td>
</tr>
<tr>
<td class="schedule">2:35 -- 3:05</td>
<td><b>Session III</b>
<br>
<a href="https://unfair-machine.netlify.app/">
(Un)Fair Machine
</a>
-- Vu Luong
<br>
<a href="https://mlu-explain.github.io/">
Amazon's MLU-Explain: Interactive Explanations of Core Machine Learning Concepts
</a>
-- Jared Wilber, Jenny Yeon, Brent Werness
<br>
<a href="https://nipunbatra.github.io/hmm/">
Exploring Hidden Markov Model
</a>
-- Rithwik Kukunuri, Rishiraj Adhikary, Mahika Jaguste, Nipun Batra, Ashish Tendulkar
<br>
</td>
</tr>
<tr>
<td class="schedule">3:05 -- 3:10</td>
<td><b>Closing Session</b></td>
</tr>
<tr>
<td class="schedule">3:10 -- 5:00</td>
<td><b>VISxAI Eastcoast Party</b></td>
</tr>
</table>
<br>
<p><b>Project Pitch Videos</b></p>
<ul>
<li>
<a href="http://shreyaa.karyk.com/rain-check">
The Rain Check
</a>
-- Shreya Agrawal, Mukund Sundararajan
</li>
<li>
<a href="https://introduction-to-autoencoders.vercel.app/">
An Interactive Introduction to Autoencoders
</a>
-- Donald Bertucci
</li>
<li>
<a href="https://amaliepauli.github.io/SuperFairML/">
How Does the Computer Become a Just Superhero? A Review of Fairness in Machine Learning
</a>
-- Amalie Brogaard Pauli, Niklas Kasenburg
</li>
<li>
<a href="https://github.com/infovis-vt/AndromedaJupyter">
Andromeda in Jupyter: Interactive Inverse Dimension Reduction
</a>
-- Han Liu, Yali Bian, Chris North
</li>
<li>
<a href="https://dudaspm.github.io/LDA_Bias_Data/intro.html">
A Jupyter Book Approach to Latent Dirichlet Allocation Understanding
</a>
-- Pranav Narayanan Venkit, Patrick M Dudas
</li>
<li>
<a href="https://ichko.github.io/one-d-gan">
One-D GAN
</a>
-- Iliya Zhechev
</li>
</ul>
<br>
<h2 id="call">Call for Participation</h2>
<!-- <p><strong>SUBMISSION CLOSED</strong></p> -->
<!-- <p> -->
<!-- To make our work more accessible to the general audience, we are soliciting submissions in a novel format:
blog-style posts and jupyter-like notebooks. In addition we also accept position papers in a more
traditional form.
Please contact us, if you want to submit a original work in another format. Email: <a
href="mailto:[email protected]">orga.visxai at gmail.com</a> -->
<!-- </p> -->
<div class="submit-button">
<a href="/submit.html">Submission instructions</a>
</div>
<br>
<p>
Explainable submissions (e.g., interactive articles, markup, and notebooks) are the core element of the
workshop, which aims to be a platform for explanatory visualizations focused on AI techniques.
</p>
<p>
Authors have the freedom to use whatever templates and formats they like. However, the narrative has to be
visual and interactive, and should walk readers through a keen understanding of the ML technique or application.
Authors may wish to write a <a href="https://distill.pub">Distill-style</a> blog post, interactive
<a href="https://idyll-lang.org/">Idyll</a> markup, or a <a href="http://jupyter.org">Jupyter</a> or
<a href="https://beta.observablehq.com/">Observable</a> notebook that integrates code, text, and
visualization to tell the story.
</p>
<p>
Here are a few examples of visual explanations of AI methods in these types of formats:
</p>
<ul>
<li>[interactive article]
<a href="https://distill.pub/2019/visual-exploration-gaussian-processes/" target="_blank">A Visual Exploration of
Gaussian Processes</a>
</li>
<li>[interactive article]
<a href="https://distill.pub/2017/momentum/" target="_blank">Why Momentum Really Works</a>
</li>
<li>[interactive article]
<a href="http://www.r2d3.us/visual-intro-to-machine-learning-part-1/" target="_blank">A Visual Introduction to Machine Learning</a>
</li>
<li>[interactive article]
<a href="http://formafluens.io/client/mix10.html" target="_blank">Art-Inspired Data Experiments on Neural Network Model Decay</a>
</li>
<li>[interactive article]
<a href="https://research.google.com/bigpicture/attacking-discrimination-in-ml/" target="_blank">Attacking Discrimination with Smarter Machine Learning</a>
</li>
<li>[markup]
<a href="https://parametric.press/issue-01/the-myth-of-the-impartial-machine/" target="_">The Myth of the Impartial Machine</a>
</li>
<li>[markup]
<a href="https://idyll.pub/post/visxai-dimensionality-reduction-1dbad0a67a092b007c526a45/" target="_">The Beginner's Guide to Dimensionality Reduction</a>
</li>
<li>[notebook]
<a href="https://beta.observablehq.com/@nstrayer/t-sne-explained-in-plain-javascript" target="_blank">t-SNE Explained in Plain JavaScript</a>
</li>
<li>[notebook]
<a href="https://observablehq.com/@nsthorat/how-to-build-a-teachable-machine-with-tensorflow-js?collection=@observablehq/explorables" target="_blank">How to build a Teachable Machine with TensorFlow.js</a>
</li>
<li>[notebook]
<a href="http://nbviewer.jupyter.org/github/agconti/kaggle-titanic/blob/master/Titanic.ipynb" target="_blank">Titanic Machine Learning from Disaster</a>
</li>
</ul>
</p>
<p>
While these examples are informative and excellent, we hope the
Visualization & ML community will think about ways to creatively expand on
such foundational work to explain AI methods using the novel interactions
and visualizations often presented at IEEE VIS.
Please contact us if you want to submit an original work in another
format. Email: <a href="mailto:orga.visxai@gmail.com">orga.visxai (at) gmail.com</a>.
</p>
<p>
Note: We also accept more traditional papers that accompany an explainable.
Be aware, however, that the explainable must stand on its own.
The reviewers will evaluate the explainable (and might choose to ignore the paper).
</p>
<p>
In previous years, the authors of the best works were invited to submit extended versions to the
online publishing platform distill.pub, giving them a citable
publication. See <a href="https://distill.pub/2019/visual-exploration-gaussian-processes/">https://distill.pub/2019/visual-exploration-gaussian-processes/</a>.
</p>
<h2 id="hall-of-fame">Hall of Fame</h2>
Each year we award one Best Submission and two Honorable Mentions. <i>Congrats to our winners!</i>
<br><br>
<h5>VISxAI 2020</h5>
<ul>
<li>
<a href="https://tiga1231.github.io/umap-tour/">
Comparing DNNs with UMAP Tour
</a> -- Mingwei Li and Carlos Scheidegger
</li>
<li>
<a href="https://www.pewresearch.org/interactives/how-does-a-computer-see-gender/">
How Does a Computer "See" Gender?
</a> -- Stefan Wojcik, Emma Remy, and Chris Baronavski
</li>
</ul>
<h5>VISxAI 2019</h5>
<ul>
<li>
<a href="https://theo-jaunet.github.io/MemoryReduction/">
What if we Reduce the Memory of an Artificial Doom Player?
</a> -- Theo Jaunet, Romain Vuillemot, and Christian Wolf
</li>
<li>
<a href="https://qnkxsovc.gitlab.io/prob-vis/">
Statistical Distances and Their Implications to GAN Training
</a> -- Max Daniels
</li>
<li>
<a href="https://mybinder.org/v2/gh/KrishnaswamyLab/visualization_selection/master?filepath=Selecting_the_right_tool_for_the_job.ipynb">
Selecting the right tool for the job: a comparison of visualization algorithms
</a> -- Daniel Burkhardt, Scott Gigante, and Smita Krishnaswamy
</li>
</ul>
<h5>VISxAI 2018</h5>
<ul>
<li>
<a href="https://www.jgoertler.com/visual-exploration-gaussian-processes/">
A Visual Exploration of Gaussian Processes
</a> -- Jochen Görtler, Rebecca Kehlbeck and Oliver Deussen
</li>
<li>
<a href="https://idyll.pub/post/visxai-dimensionality-reduction-1dbad0a67a092b007c526a45/">
The Beginner's Guide to Dimensionality Reduction
</a> -- Matthew Conlen and Fred Hohman
</li>
<li>
<a href="https://roadsfromabove.netlify.com/">
Roads from Above
</a> -- Greg More, Slaven Marusic and Caihao Cui
</li>
</ul>
<!-- <p> <strong>SUBMISSION CLOSED</strong></p> -->
<h2 id="orga">Organizers <span style="font-size: small">(alphabetic)</span>
</h2>
<p>
Adam Perer - Carnegie Mellon University<br />
<!-- Duen Horng (Polo) Chau - Georgia Tech<br /> -->
<!-- Fernanda Viégas - Google Brain<br /> -->
Fred Hohman - Apple<br />
Hendrik Strobelt - MIT-IBM Watson AI Lab<br />
Mennatallah El-Assady - ETH AI Center<br />
</p>
<h5>Steering Committee</h5>
<p>
Duen Horng (Polo) Chau - Georgia Tech<br />
Fernanda Viégas - Google Brain<br />
</p>
<h2 id="pc">Program Committee</h2>
<p>
Marco Angelini<br />
Jürgen Bernard<br />
Angie Boggust<br />
Nan Cao<br />
Marco Cavallo<br />
Jaegul Choo<br />
Tommy Dang<br />
Victor Dibia<br />
Angus Forbes<br />
Iris Howley<br />
Denis Parra<br />
Arjun Srinivasan<br />
Romain Vuillemot<br />
Yang Wang<br />
James Wexler<br />
</p>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-119596896-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }
gtag('js', new Date());
gtag('config', 'UA-119596896-1');
</script>
</div>
</body>
</html>