<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="description"
content="Towards Neuro-Symbolic Video Understanding">
<meta name="keywords" content="video, understanding, reasoning, neuro-symbolic, ai, temporal, logic, formal, methods">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Towards Neuro-Symbolic Video Understanding</title>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" href="./static/images/favicon.png" type="image/png">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
<style>
@media only screen and (max-width: 768px) {
body {
margin: 20px;
}
}
</style>
</head>
<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
<div class="navbar-brand">
<a role="button" class="navbar-burger" aria-label="menu" aria-expanded="false">
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
</a>
</div>
<div class="navbar-menu">
<div class="navbar-start" style="flex-grow: 1; justify-content: center;">
<a class="navbar-item" href="https://utaustin-swarmlab.github.io/">
<span class="icon">
<i class="fas fa-home"></i>
</span>
</a>
<div class="navbar-item has-dropdown is-hoverable">
<a class="navbar-link">
More Research
</a>
<div class="navbar-dropdown">
<a class="navbar-item" href="https://utaustin-swarmlab.github.io/nsvs/">
NSVS
</a>
<a class="navbar-item" href="https://utaustin-swarmlab.github.io/NeuS-V/">
NeuS-V
</a>
<a class="navbar-item" href="https://utaustin-swarmlab.github.io/NeuS-QA/">
NeuS-QA
</a>
</div>
</div>
</div>
</div>
</nav>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">Towards Neuro-Symbolic Video Understanding</h1>
<div class="is-size-5 publication-authors">
<span class="author-block">
Minkyu Choi<sup>1,2</sup>,</span>
<span class="author-block">
Harsh Goel<sup>† 1,2</sup>,</span>
<span class="author-block">
Mohammad Omama<sup>† 1,2</sup>,
</span>
<span class="author-block">
Yunhao Yang<sup>1</sup>,
</span>
<span class="author-block">
Sahil Shah<sup>1,2</sup>,
</span>
<span class="author-block">
Sandeep Chinchali<sup>1,2</sup>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>1</sup>The University of Texas at Austin</span>
<span class="author-block"><sup>2</sup><a href="https://utaustin-swarmlab.github.io/">UT Swarm Lab</a></span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block">†Contributed equally to this work</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF Link. -->
<span class="link-block">
<a href="https://link.springer.com/chapter/10.1007/978-3-031-73229-4_13"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span>
<span class="link-block">
<a href="https://arxiv.org/abs/2403.11021"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</a>
</span>
<!-- Code Link. -->
<span class="link-block">
<a href="https://github.com/UTAustin-SwarmLab/Neuro-Symbolic-Video-Search-Temporal-Logic"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Source Code</span>
</a>
</span>
<span class="link-block">
<a href="https://github.com/UTAustin-SwarmLab/Temporal-Logic-Video-Dataset"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>TLV Dataset</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="rows is-centered has-text-centered">
<video autoplay muted loop playsinline height="100%">
<source src="static/videos/flying_caption.mp4" type="video/mp4">
</video>
<br><br>
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
        <p>
          The unprecedented surge in video data production in recent years necessitates efficient tools to extract meaningful frames from videos for downstream tasks. Long-term temporal reasoning is a key desideratum for frame retrieval systems. While state-of-the-art foundation models, like VideoLLaMA and ViCLIP, are proficient in short-term semantic understanding, they surprisingly fail at long-term reasoning across frames. A key reason for this failure is that they intertwine per-frame perception and temporal reasoning into a single deep network. Hence, decoupling but co-designing the semantic understanding and temporal reasoning is essential for efficient scene identification. We propose a system that leverages vision-language models for semantic understanding of individual frames but effectively reasons about the long-term evolution of events using state machines and temporal logic (TL) formulae that inherently capture memory. Our TL-based reasoning improves the F1 score of complex event identification by 9-15% compared to benchmarks that use GPT-4 for reasoning on state-of-the-art self-driving datasets such as Waymo and NuScenes. The source code is available on <a href="https://github.com/UTAustin-SwarmLab/Neuro-Symbolic-Video-Search-Temporal-Logic">GitHub</a>.
        </p>
      </div>
</div>
</div>
</section>
<div class="container is-max-desktop">
<div class="rows is-centered">
<div class="row">
<h2 class="title is-3 has-text-centered">Methodology</h2>
</div>
<br>
<div class="row">
      <p>
        We introduce a neuro-symbolic approach to identifying scenes of interest. Given a video stream or clip together with a temporal logic specification Φ (for example, "eventually, a car and a pedestrian appear in the same frame"), Neuro-Symbolic Video Search with Temporal Logic (NSVS-TL) returns the frames that satisfy Φ.
      </p>
</div>
<br>
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<img src="static/images/fig1_teaser.png" alt="Method Overview" class="zoomable-image">
</div>
</div>
      <ul class="dashed-list">
        <li><b>Step 1:</b> We calibrate the confidence of neural perception models to ensure precise object detection. This calibration lets us detect the relevant atomic propositions in each frame, which later drive the construction of a probabilistic automaton.
        </li>
        <li><b>Step 2:</b> Each frame then passes through two distinct validation functions, ensuring that only frames containing relevant visual information proceed to the next stage of the method.
        </li>
        <li><b>Step 3:</b> Upon validation, we dynamically construct a probabilistic automaton that encodes the temporal and logical relations between successive frames.
        </li>
        <li><b>Step 4:</b> Finally, we apply model checking to determine whether the constructed automaton satisfies the temporal logic specification. If it does, the corresponding sequence of frames is identified as a scene of interest for the given specification. A minimal code sketch of this pipeline follows the list.
        </li>
      </ul>
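      <p>
        To make these steps concrete, below is a minimal, self-contained sketch of the pipeline in Python. It is not the released implementation: the perception stub, the hard-coded specification "eventually (car AND pedestrian)", and every function name are illustrative placeholders, and the full probabilistic automaton and model checker are collapsed into a simple scan over the frame trace.
      </p>
      <pre><code># Minimal sketch of the NSVS-TL pipeline (illustrative only).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # calibrated confidence in [0, 1]

def detect_propositions(frame, threshold=0.5):
    """Step 1: turn calibrated detections into atomic propositions."""
    return {d.label for d in frame if d.confidence >= threshold}

def is_relevant(props, spec_vocabulary):
    """Step 2: keep only frames mentioning a proposition from the spec."""
    return bool(props.intersection(spec_vocabulary))

def satisfies_eventually_all(trace, targets):
    """Steps 3 and 4, specialized to 'eventually (car AND pedestrian)':
    instead of building a probabilistic automaton and model checking it,
    scan the trace for a frame where all target propositions hold."""
    return any(targets.issubset(props) for props in trace)

# Toy "video": each frame is a list of calibrated detections.
video = [
    [Detection("car", 0.9)],
    [Detection("car", 0.8), Detection("pedestrian", 0.7)],
    [Detection("tree", 0.6)],
]
spec_vocabulary = {"car", "pedestrian"}
trace = [detect_propositions(frame) for frame in video]
trace = [props for props in trace if is_relevant(props, spec_vocabulary)]
print(satisfies_eventually_all(trace, spec_vocabulary))  # prints: True</code></pre>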
</div>
<br>
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h3 class="title is-4">Autonomous Driving Example</h3>
<video autoplay muted loop playsinline height="100%">
<source src="static/videos/autonous_driving_demo.mp4" type="video/mp4">
</video>
</div>
</div>
</div>
<br><br>
<div class="container is-max-desktop">
<div class="row">
<h2 class="title is-3 has-text-centered">Key Capabilities</h2>
</div>
<br>
<div class="columns is-centered">
<div class="column">
<div class="content">
<h3 class="title is-4 has-text-centered">Long Horizon Video Understanding</h3>
          <p>
            We evaluate multi-event sequences with temporally extended gaps between events, which substantially increase video length. Performance remains consistent on videos spanning up to 40 minutes, indicating that the method reliably handles long videos.
          </p>
</div>
</div>
<div class="column">
<div class="columns is-centered">
<div class="column content">
<h3 class="title is-4 has-text-centered">Plug In Your Own Model</h3>
          <p>
            Our framework can integrate any neural perception model as its semantic front end, so advances in perception directly improve video understanding. This lets us localize frames of interest for a given query independently of the underlying model; a minimal interface sketch appears after the figures below.
          </p>
</div>
</div>
</div>
</div>
<div class="columns is-centered">
<div class="column">
<div class="content">
<img src="static/images/fig5b_performance_in_durations.png" class="zoomable-image">
</div>
</div>
<div class="column">
<div class="columns is-centered">
<div class="column content">
<img src="static/images/fig5a_performance_different_nn.png" class="zoomable-image">
</div>
</div>
</div>
</div>
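  <p>
    As a sketch of what plugging in your own model means in practice, the snippet below defines a minimal perception interface and two interchangeable toy backends. The PerceptionModel protocol and both detector classes are hypothetical, shown only to illustrate the design: any detector that maps a frame to a set of atomic propositions could stand in for them, since the downstream TL reasoning never sees the raw frames.
  </p>
  <pre><code># Hypothetical interface illustrating the pluggable-perception design.
from typing import Protocol

class PerceptionModel(Protocol):
    def propositions(self, frame) -> set:
        """Map one frame to the set of atomic propositions it satisfies."""
        ...

class KeywordDetector:
    """Toy backend: frames are strings, propositions are known keywords."""
    def __init__(self, vocabulary):
        self.vocabulary = set(vocabulary)

    def propositions(self, frame) -> set:
        return {word for word in frame.split() if word in self.vocabulary}

class ConstantDetector:
    """Toy backend that always reports the same propositions."""
    def __init__(self, props):
        self.props = set(props)

    def propositions(self, frame) -> set:
        return self.props

def run_pipeline(model, frames):
    """The TL reasoning layer only consumes proposition sets, so any
    backend satisfying the interface can be swapped in."""
    return [model.propositions(f) for f in frames]

frames = ["car pedestrian", "car", "tree"]
print(run_pipeline(KeywordDetector({"car", "pedestrian"}), frames))
print(run_pipeline(ConstantDetector({"car"}), frames))</code></pre>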
<h3 class="title is-4 has-text-centered">Comparison to Benchmark</h3>
  <p>
    Across our experiments, the performance of NSVS-TL with different neural perception models depends on the complexity of the TL specification and on the dataset. For single-event scenarios, both our method and LLM-based reasoning perform reasonably well, since such events require little temporal reasoning; for multi-event scenarios, our TL-based reasoning outperforms all LLM-based baselines.
  </p>
<br>
<img src="static/images/fig6_performance_result.png" class="zoomable-image">
</div>
<br>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{Choi_2024_ECCV,
author = {Choi, Minkyu and Goel, Harsh and Omama, Mohammad and Yang, Yunhao and Shah, Sahil and Chinchali, Sandeep},
title = {Towards Neuro-Symbolic Video Understanding},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2024},
}</code></pre>
</div>
</section>
<footer class="footer">
<div class="container">
<div class="columns is-centered">
<div class="column is-8">
<div class="content has-text-centered">
          <p>
            Website based on the <a href="https://github.com/nerfies/nerfies.github.io">Nerfies</a> source code.
          </p>
</div>
</div>
</div>
</div>
</footer>
<script>
  // Lightbox-style zoom for images marked with the "zoomable-image" class.
  document.addEventListener("DOMContentLoaded", () => {
    const images = document.querySelectorAll(".zoomable-image");

    // Build a single full-screen overlay shared by all zoomable images.
    const zoomOverlay = document.createElement("div");
    zoomOverlay.classList.add("zoom-overlay");
    const zoomImage = document.createElement("img");
    zoomOverlay.appendChild(zoomImage);
    document.body.appendChild(zoomOverlay);

    images.forEach((img) => {
      // If a high-resolution variant is declared, preload it so the
      // overlay opens without a visible loading delay.
      const highResSrc = img.getAttribute("data-highres");
      if (highResSrc) {
        const preloadImg = new Image();
        preloadImg.src = highResSrc;
      }
      img.addEventListener("click", () => {
        zoomImage.src = highResSrc || img.src;
        zoomOverlay.classList.add("active");
      });
    });

    // Dismiss the overlay by clicking it or pressing Escape.
    zoomOverlay.addEventListener("click", () => {
      zoomOverlay.classList.remove("active");
      zoomImage.src = "";
    });
    document.addEventListener("keydown", (e) => {
      if (e.key === "Escape") zoomOverlay.classList.remove("active");
    });
  });
</script>
</body>
</html>