<!DOCTYPE html>
<html lang="en-us">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="theme-color" content="#2962ff">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/academicons/1.8.6/css/academicons.min.css" integrity="sha256-uFVgMKfistnJAfoCUQigIl+JfUaP47GrRKjf6CTPVmw=" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.12.0-1/css/all.min.css" integrity="sha256-4w9DunooKSr3MFXHXWyFER38WmPdm361bQS/2KUWZbU=" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/fancybox/3.5.7/jquery.fancybox.min.css" integrity="sha256-Vzbj7sDDS/woiFS3uNKo8eIuni59rjyNGtXfstRzStA=" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.5.1/leaflet.css" integrity="sha256-SHMGCYmST46SoyGgo4YR/9AlK1vf3ff84Aq9yK4hdqM=" crossorigin="anonymous">
<link rel="stylesheet" href="/academic.css">
<title>Eunji Chong</title>
</head>
<body id="top" data-spy="scroll" data-offset="70" data-target="#navbar-main" >
<div class="container">
<div class="row">
<div class="col-12 col-lg-4">
<div id="profile">
<img class="avatar avatar-circle" src="/profile.jpg" alt="Avatar">
<div class="portrait-title">
<h2>Eunji Chong, PhD</h2>
</div>
<ul class="network-icon" aria-hidden="true">
<li>
<a href="https://scholar.google.com/citations?user=Pb5N0xEAAAAJ&hl=en" target="_blank" rel="noopener">
<i class="ai ai-google-scholar big-icon"></i>
</a>
</li>
<li>
<a href="https://github.com/ejcgt" target="_blank" rel="noopener">
<i class="fab fa-github big-icon"></i>
</a>
</li>
<li>
<a href="https://drive.google.com/file/d/1CPM9tB7sPglFHfV2ggGH0UmAPiScO43o/view?usp=sharing" target="_blank" rel="noopener">
<i class="ai ai-cv big-icon"></i>
</a>
</li>
</ul>
</div>
</div>
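<!-- Bio and education column -->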
<div class="col-12 col-lg-8">
<p> I am an Applied Scientist at Amazon.
<br>
I received my Ph.D. from the <a href="https://ic.gatech.edu/" target="_blank" rel="noopener">School of Interactive Computing</a>
at <a href="https://gatech.edu" target="_blank" rel="noopener">Georgia Tech</a> in 2020,
where I worked under the supervision of Professor <a href="https://rehg.org" target="_blank" rel="noopener">James M. Rehg</a>.
During my PhD, my research focused on developing computer vision methods for measuring and interpreting visual attention in social contexts,
in order to better model and understand human behavior.
</p>
<div class="row">
<div class="col-md-6">
<h3>Education</h3>
<ul class="ul-edu fa-ul">
<li>
<i class="fa-li fas fa-graduation-cap"></i>
<div class="description">
<p class="course">Ph.D. in Computer Science, 2020</p>
<p class="institution">Georgia Institute of Technology</p>
</div>
</li>
<li>
<i class="fa-li fas fa-graduation-cap"></i>
<div class="description">
<p class="course">B.S. in Computer Science, 2012</p>
<p class="institution">Yonsei University</p>
</div>
</li>
</ul>
</div>
</div>
</div>
</div>
</div>
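<!-- Selected publications -->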
<section style="padding: 20px 0;">
<div class="container">
<div class="row">
<div class="col-lg-12">
<h1>Selected Publications</h1>
<h2>For a complete list, see my CV or the Google Scholar page linked above.</h2>
</div>
<div class="row">
<ul class="ul-papers">
<li>
<div class="description">
<p class="title">Detecting Attended Visual Targets in Video</p>
<p class="authors"><u>Eunji Chong</u>, Yongxin Wang, Nataniel Ruiz, James M. Rehg</p>
<p class="venue">CVPR 2020 <span style="font-weight:bolder;color:#BB2222"></span></p>
<p class="resources">
[ <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Chong_Detecting_Attended_Visual_Targets_in_Video_CVPR_2020_paper.pdf" target="_blank" rel="noopener">paper</a> ]
[ <a href="https://github.com/ejcgt/attention-target-detection" target="_blank" rel="noopener">code</a> ]
[ <a href="https://github.com/ejcgt/attention-target-detection#dataset-1" target="_blank" rel="noopener">dataset</a> ]
[ <a href="papers/cvpr20bib.html" target="_blank" rel="noopener">bibtex</a> ]
</p>
</div>
</li>
<li>
<div class="description">
<p class="title">Detection of eye contact with deep neural networks is as accurate as human experts</p>
<p class="authors"><u>Eunji Chong</u>, et. al.</p>
<p class="venue">Nature Communications<span style="font-weight:bolder;color:#BB2222"></span></p>
<p class="resources">
[ <a href="https://www.nature.com/articles/s41467-020-19712-x" target="_blank" rel="noopener">paper</a> ]
[ <a href="https://github.com/ejcgt/eye-contact-cnn" target="_blank" rel="noopener">code</a> ]
[ <a href="papers/natcommbib.html" target="_blank" rel="noopener">bibtex</a> ]
</p>
</div>
</li>
<li>
<div class="description">
<p class="title">Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency</p>
<p class="authors"><u>Eunji Chong</u>, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, James M. Rehg</p>
<p class="venue">ECCV 2018 <span style="font-weight:bolder;color:#BB2222"></span></p>
<p class="resources">
[ <a href="https://openaccess.thecvf.com/content_ECCV_2018/papers/Eunji_Chong_Connecting_Gaze_Scene_ECCV_2018_paper.pdf" target="_blank" rel="noopener">paper</a> ]
[ <a href="https://github.com/ejcgt/attention-target-detection#dataset" target="_blank" rel="noopener">annotation</a> ]
[ <a href="papers/eccv18bib.html" target="_blank" rel="noopener">bibtex</a> ]
</p>
</div>
</li>
<li>
<div class="description">
<p class="title">Fine-Grained Head Pose Estimation Without Keypoints</p>
<p class="authors">Nataniel Ruiz, <u>Eunji Chong</u>, James M. Rehg</p>
<p class="venue">Workshop on Analysis and Modeling of Faces and Gestures at CVPR 2018<span style="font-weight:bolder;color:#BB2222"></span></p>
<p class="resources">
[ <a href="https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w41/Ruiz_Fine-Grained_Head_Pose_CVPR_2018_paper.pdf" target="_blank" rel="noopener">paper</a> ]
[ <a href="https://github.com/natanielruiz/deep-head-pose" target="_blank" rel="noopener">code</a> ]
[ <a href="papers/cvprw18bib.html" target="_blank" rel="noopener">bibtex</a> ]
</p>
</div>
</li>
</ul>
</div>
</div>
</div>
</section>
</body>
</html>