<!DOCTYPE html>
<html lang="en" data-content_root="../" >
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>ICML Challenge 2024 — TopoX documentation</title>
<script data-cfasync="false">
document.documentElement.dataset.mode = localStorage.getItem("mode") || "";
document.documentElement.dataset.theme = localStorage.getItem("theme") || "light";
</script>
<!-- Loaded before other Sphinx assets -->
<link href="../_static/styles/theme.css?digest=8d27b9dea8ad943066ae" rel="stylesheet" />
<link href="../_static/styles/bootstrap.css?digest=8d27b9dea8ad943066ae" rel="stylesheet" />
<link href="../_static/styles/pydata-sphinx-theme.css?digest=8d27b9dea8ad943066ae" rel="stylesheet" />
<link href="../_static/vendor/fontawesome/6.5.1/css/all.min.css?digest=8d27b9dea8ad943066ae" rel="stylesheet" />
<link rel="preload" as="font" type="font/woff2" crossorigin href="../_static/vendor/fontawesome/6.5.1/webfonts/fa-solid-900.woff2" />
<link rel="preload" as="font" type="font/woff2" crossorigin href="../_static/vendor/fontawesome/6.5.1/webfonts/fa-brands-400.woff2" />
<link rel="preload" as="font" type="font/woff2" crossorigin href="../_static/vendor/fontawesome/6.5.1/webfonts/fa-regular-400.woff2" />
<link rel="stylesheet" type="text/css" href="../_static/pygments.css?v=a746c00c" />
<link rel="stylesheet" type="text/css" href="../_static/sg_gallery.css?v=61a4c737" />
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../_static/scripts/bootstrap.js?digest=8d27b9dea8ad943066ae" />
<link rel="preload" as="script" href="../_static/scripts/pydata-sphinx-theme.js?digest=8d27b9dea8ad943066ae" />
<script src="../_static/vendor/fontawesome/6.5.1/js/all.min.js?digest=8d27b9dea8ad943066ae"></script>
<script src="../_static/documentation_options.js?v=5929fcd5"></script>
<script src="../_static/doctools.js?v=9a2dae69"></script>
<script src="../_static/sphinx_highlight.js?v=dc90522c"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script>DOCUMENTATION_OPTIONS.pagename = 'packs/challenge';</script>
<link rel="canonical" href="https://pyt-team.github.io/packs/challenge.html" />
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="Pyt-Team" href="about.html" />
<link rel="prev" title="TopoEmbedX (TEX)" href="index_tex.html" />
<meta name="docsearch:language" content="en"/>
<meta name="docbuild:last-update" content="Jul 28, 2024, 3:47:26 PM"/>
</head>
<body data-bs-spy="scroll" data-bs-target=".bd-toc-nav" data-offset="180" data-bs-root-margin="0px 0px -60%" data-default-mode="">
<a id="pst-skip-link" class="skip-link" href="#main-content">Skip to main content</a>
<div id="pst-scroll-pixel-helper"></div>
<button type="button" class="btn rounded-pill" id="pst-back-to-top">
<i class="fa-solid fa-arrow-up"></i>
Back to top
</button>
<input type="checkbox"
class="sidebar-toggle"
name="__primary"
id="__primary"/>
<label class="overlay overlay-primary" for="__primary"></label>
<input type="checkbox"
class="sidebar-toggle"
name="__secondary"
id="__secondary"/>
<label class="overlay overlay-secondary" for="__secondary"></label>
<div class="search-button__wrapper">
<div class="search-button__overlay"></div>
<div class="search-button__search-container">
<form class="bd-search d-flex align-items-center"
action="../search.html"
method="get">
<i class="fa-solid fa-magnifying-glass"></i>
<input type="search"
class="form-control"
name="q"
id="search-input"
placeholder="Search the docs ..."
aria-label="Search the docs ..."
autocomplete="off"
autocorrect="off"
autocapitalize="off"
spellcheck="false"/>
<span class="search-button__kbd-shortcut"><kbd class="kbd-shortcut__modifier">Ctrl</kbd>+<kbd>K</kbd></span>
</form></div>
</div>
<header class="bd-header navbar navbar-expand-lg bd-navbar">
<div class="bd-header__inner bd-page-width">
<label class="sidebar-toggle primary-toggle" for="__primary">
<span class="fa-solid fa-bars"></span>
</label>
<div class="col-lg-3 navbar-header-items__start">
<div class="navbar-item">
<a class="navbar-brand logo" href="../index.html">
<p class="title logo__title">TopoX documentation</p>
</a></div>
</div>
<div class="col-lg-9 navbar-header-items">
<div class="me-auto navbar-header-items__center">
<div class="navbar-item">
<nav class="navbar-nav">
<ul class="bd-navbar-elements navbar-nav">
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tnx.html">
TopoNetX (TNX)
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tmx.html">
TopoModelX (TMX)
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tex.html">
TopoEmbedX (TEX)
</a>
</li>
<li class="nav-item current active">
<a class="nav-link nav-internal" href="#">
ICML Challenge 2024
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="about.html">
Pyt-Team
</a>
</li>
</ul>
</nav></div>
</div>
<div class="navbar-header-items__end">
<div class="navbar-item navbar-persistent--container">
<script>
document.write(`
<button class="btn navbar-btn search-button-field search-button__button" title="Search" aria-label="Search" data-bs-placement="bottom" data-bs-toggle="tooltip">
<i class="fa-solid fa-magnifying-glass"></i>
<span class="search-button__default-text">Search</span>
<span class="search-button__kbd-shortcut"><kbd class="kbd-shortcut__modifier">Ctrl</kbd>+<kbd class="kbd-shortcut__modifier">K</kbd></span>
</button>
`);
</script>
</div>
<div class="navbar-item">
<script>
document.write(`
<button class="btn btn-sm navbar-btn theme-switch-button" title="light/dark" aria-label="light/dark" data-bs-placement="bottom" data-bs-toggle="tooltip">
<span class="theme-switch nav-link" data-mode="light"><i class="fa-solid fa-sun fa-lg"></i></span>
<span class="theme-switch nav-link" data-mode="dark"><i class="fa-solid fa-moon fa-lg"></i></span>
<span class="theme-switch nav-link" data-mode="auto"><i class="fa-solid fa-circle-half-stroke fa-lg"></i></span>
</button>
`);
</script></div>
</div>
</div>
<div class="navbar-persistent--mobile">
<script>
document.write(`
<button class="btn navbar-btn search-button-field search-button__button" title="Search" aria-label="Search" data-bs-placement="bottom" data-bs-toggle="tooltip">
<i class="fa-solid fa-magnifying-glass"></i>
<span class="search-button__default-text">Search</span>
<span class="search-button__kbd-shortcut"><kbd class="kbd-shortcut__modifier">Ctrl</kbd>+<kbd class="kbd-shortcut__modifier">K</kbd></span>
</button>
`);
</script>
</div>
<label class="sidebar-toggle secondary-toggle" for="__secondary" tabindex="0">
<span class="fa-solid fa-outdent"></span>
</label>
</div>
</header>
<div class="bd-container">
<div class="bd-container__inner bd-page-width">
<div class="bd-sidebar-primary bd-sidebar hide-on-wide">
<div class="sidebar-header-items sidebar-primary__section">
<div class="sidebar-header-items__center">
<div class="navbar-item">
<nav class="navbar-nav">
<ul class="bd-navbar-elements navbar-nav">
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tnx.html">
TopoNetX (TNX)
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tmx.html">
TopoModelX (TMX)
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="index_tex.html">
TopoEmbedX (TEX)
</a>
</li>
<li class="nav-item current active">
<a class="nav-link nav-internal" href="#">
ICML Challenge 2024
</a>
</li>
<li class="nav-item">
<a class="nav-link nav-internal" href="about.html">
Pyt-Team
</a>
</li>
</ul>
</nav></div>
</div>
<div class="sidebar-header-items__end">
<div class="navbar-item">
<script>
document.write(`
<button class="btn btn-sm navbar-btn theme-switch-button" title="light/dark" aria-label="light/dark" data-bs-placement="bottom" data-bs-toggle="tooltip">
<span class="theme-switch nav-link" data-mode="light"><i class="fa-solid fa-sun fa-lg"></i></span>
<span class="theme-switch nav-link" data-mode="dark"><i class="fa-solid fa-moon fa-lg"></i></span>
<span class="theme-switch nav-link" data-mode="auto"><i class="fa-solid fa-circle-half-stroke fa-lg"></i></span>
</button>
`);
</script></div>
</div>
</div>
<div class="sidebar-primary-items__end sidebar-primary__section">
</div>
<div id="rtd-footer-container"></div>
</div>
<main id="main-content" class="bd-main">
<div class="bd-content">
<div class="bd-article-container">
<div class="bd-header-article">
<div class="header-article-items header-article__inner">
<div class="header-article-items__start">
<div class="header-article-item">
<nav aria-label="Breadcrumb">
<ul class="bd-breadcrumbs">
<li class="breadcrumb-item breadcrumb-home">
<a href="../index.html" class="nav-link" aria-label="Home">
<i class="fa-solid fa-home"></i>
</a>
</li>
<li class="breadcrumb-item active" aria-current="page">ICML Challenge 2024</li>
</ul>
</nav>
</div>
</div>
</div>
</div>
<div id="searchbox"></div>
<article class="bd-article">
<section id="icml-challenge-2024">
<h1>ICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain<a class="headerlink" href="#icml-challenge-2024" title="Link to this heading">#</a></h1>
<p>Welcome to the Topological Deep Learning Challenge 2024: <em>Beyond
the Graph Domain</em>, jointly organized by <a class="reference external" href="https://www.tagds.com">TAG-DS</a>
& PyT-Team and hosted by the <a class="reference external" href="https://gram-workshop.github.io">Geometry-grounded Representation
Learning and Generative Modeling (GRaM) Workshop</a> at ICML 2024.</p>
<div class="admonition seealso">
<p class="admonition-title">See also</p>
<p>Link to the challenge repository: <a class="github reference external" href="https://github.com/pyt-team/challenge-icml-2024">pyt-team/challenge-icml-2024</a>.</p>
</div>
<p><em>Organizers, reviewers, and contributors:</em> Guillermo Bernárdez, Lev Telyatnikov, Marco Montagna, Federica Baccini,
Nina Miolane, Mathilde Papillon, Miquel Ferriol-Galmés, Mustafa Hajij, Theodore Papamarkou, Johan Mathe, Audun Myers,
Scott Mahan, Olga Zaghen, Maria Sofia Bucarelli, Hansen Lillemark, Sharvaree Vadgama, Erik Bekkers, Tim Doster, Tegan Emerson,
Henry Kvinge.</p>
</section>
<section id="winners">
<h1>Winners<a class="headerlink" href="#winners" title="Link to this heading">#</a></h1>
<section id="st-category">
<h2>🏆 1st Category<a class="headerlink" href="#st-category" title="Link to this heading">#</a></h2>
<p>🥇 1st-place, <strong>PR 63: Random Latent Clique Lifting</strong> (Graph to Simplicial); by Mauricio Tec, Claudio Battiloro, George Dasoulas</p>
<p>🥈 2nd-place, <strong>PR 58: Hypergraph Heat Kernel Lifting</strong> (Hypergraph to Simplicial); by Matt Piekenbrock</p>
<p>🥉 3rd-place, <strong>PR 11: DnD Lifting</strong> (Graph to Simplicial); by Jonas Verhellen</p>
</section>
<section id="nd-category">
<h2>🏆 2nd Category<a class="headerlink" href="#nd-category" title="Link to this heading">#</a></h2>
<p>🥇 1st-place, <strong>PR 57: Simplicial Paths Lifting</strong> (Graph to Combinatorial); by Manuel Lecha, Andrea Cavallo, Claudio Battiloro</p>
<p>🥈 2nd-place, <strong>PR 32: Matroid Lifting</strong> (Graph to Combinatorial); by Giordan Escalona</p>
<p>🥉 3rd-place, <strong>PR 33: Forman-Ricci Curvature Coarse Geometry Lifting</strong> (Graph to Hypergraph); by Michael Banf, Dominik Filipiak, Max Schattauer, Liliya Imasheva</p>
</section>
<section id="rd-category">
<h2>🏆 3rd Category<a class="headerlink" href="#rd-category" title="Link to this heading">#</a></h2>
<p>🥇 1st-place, <strong>PR 53: PointNet++ Lifting</strong> (Pointcloud to Hypergraph); by Julian Suk, Patryk Rygiel</p>
<p>🥈 2nd-place, <strong>PR 30: Kernel Lifting</strong> (Graph to Hypergraph); by Alexander Nikitin</p>
<p>🥉 3rd-place, <strong>PR 45: Mixture of Gaussians + MST Lifting</strong> (Pointcloud to Hypergraph); by Sebastian Mežnar, Boshko Koloski, Blaž Škrlj</p>
</section>
<section id="th-category">
<h2>🏆 4th Category<a class="headerlink" href="#th-category" title="Link to this heading">#</a></h2>
<p>🥇 1st-place, <strong>PR 32: Matroid Lifting</strong> (Graph to Combinatorial); by Giordan Escalona</p>
<p>🥈 2nd-place, <strong>PR 33: Forman-Ricci Curvature Coarse Geometry Lifting</strong> (Graph to Hypergraph); by Michael Banf, Dominik Filipiak, Max Schattauer, Liliya Imasheva</p>
<p>🥉 3rd-place, <strong>PR 58: Hypergraph Heat Kernel Lifting</strong> (Hypergraph to Simplicial); by Matt Piekenbrock</p>
</section>
<section id="honorable-mentions">
<h2>🏆 Honorable Mentions<a class="headerlink" href="#honorable-mentions" title="Link to this heading">#</a></h2>
<ul class="simple">
<li><p>⭐ <strong>Great Contributors</strong> ⭐</p>
<ul>
<li><p><strong>Martin Carrasco</strong> (PRs 28, 29, 41, 50)</p></li>
<li><p><strong>Bertran Miquel-Oliver, Manel Gil-Sorribes, Alexis Molina, Victor Guallar</strong> (PRs 14, 16, 21, 37, 42)</p></li>
<li><p><strong>Theodore Long</strong> (PRs 22, 35, 65)</p></li>
<li><p><strong>Jonas Verhellen</strong> (PRs 5, 7, 8, 10, 11)</p></li>
<li><p><strong>Pavel Snopov</strong> (PRs 6, 9, 18, 20)</p></li>
<li><p><strong>Julian Suk, Patryk Rygiel</strong> (PRs 23, 34, 53)</p></li>
</ul>
</li>
<li><p>🎖️ <strong>Highlighted Submissions</strong> 🎖️</p>
<ul>
<li><p><strong>PR 49: Modularity Maximization Lifting</strong> (Graph to Hypergraph); by Valentina Sánchez</p></li>
<li><p><strong>PR 47: Universal Strict Lifting</strong> (Hypergraph to Combinatorial); by Álvaro Martinez</p></li>
<li><p><strong>PR 48: Mapper Lifting</strong> (Graph to Hypergraph); by Halley Fritze, Marissa Masden</p></li>
</ul>
</li>
</ul>
</section>
</section>
<section id="motivation">
<h1>Motivation<a class="headerlink" href="#motivation" title="Link to this heading">#</a></h1>
<p>In the field of Topological Deep Learning (TDL), one of the primary objectives is to develop deep learning models tailored to data supported on topological domains, including simplicial complexes, cell complexes, and hypergraphs. These domains encapsulate diverse structures encountered in scientific computations. Naturally, topological domains serve as a means to represent the higher-order interactions inherent in any complex system, such as social connections within communities, molecular structures and reactions, or n-body interactions. Specifically, TDL techniques encode these higher-order relationships using principles from algebraic topology;
<a class="reference internal" href="#fig-domains"><span class="std std-numref">Fig. 1</span></a> illustrates the standard topological domains used
to that end.</p>
<figure class="align-default" id="fig-domains">
<img alt="../_images/domain_categories_with_relations.png" src="../_images/domain_categories_with_relations.png" />
<figcaption>
<p><span class="caption-number">Fig. 1 </span><span class="caption-text">Domains of Topological Deep Learning. Figure from <a class="reference external" href="https://arxiv.org/abs/2304.10031">Papillon et al.,
2023</a> (adapted from <a class="reference external" href="https://arxiv.org/abs/2206.00606">Hajij et al.,
2023</a>, a work highly recommended to readers
interested in further details about these domains).</span><a class="headerlink" href="#fig-domains" title="Link to this image">#</a></p>
</figcaption>
</figure>
<p>Despite its recent emergence, TDL is already postulated to become a
relevant tool in many research areas and applications, from complex
physical systems and signal processing to molecular analysis or social
interactions, to name a few. However, a current limiting factor is that
most existing datasets are presently stored as point clouds or graphs,
i.e. the traditional discrete domains (<a class="reference internal" href="#fig-domains"><span class="std std-numref">Fig. 1</span></a>). While
researchers have introduced various mechanisms for extracting
higher-order elements, it remains unclear how to optimize the process
given a specific dataset and task.</p>
<p>The main purpose of this challenge is precisely to foster new research
and knowledge about effective mappings between different topological
domains and data structures, helping to expand the current scope and
impact of TDL to a much broader range of contexts and scenarios.</p>
<p><strong>Remark:</strong> This process of mapping a data structure to a different
topological domain is called “topological lifting”, or simply “lifting”;
<a class="reference internal" href="#fig-lifting"><span class="std std-numref">Fig. 2</span></a> shows some visual examples. A
topological lifting transfers data from the original domain, where the
signal (node/edge features) lives, to a new domain where new objects
can appear, such as the cells of simplicial/cell complexes. It is therefore crucial to
also derive and provide feature descriptors for these newly introduced
objects; this process is known as “feature lifting”.</p>
<figure class="align-default" id="fig-lifting">
<img alt="../_images/lifting_maps.png" src="../_images/lifting_maps.png" />
<figcaption>
<p><span class="caption-number">Fig. 2 </span><span class="caption-text">Examples of liftings: (a) A graph is lifted to a hypergraph by adding
hyperedges that connect groups of nodes. (b) A graph can be lifted to
a cellular complex by adding faces of any shape. (c) Hyperedges can
be added to a cellular complex to lift the structure to a
combinatorial complex. Figure adapted from <a class="reference external" href="https://arxiv.org/abs/2206.00606">Hajij et al.
2023.</a></span><a class="headerlink" href="#fig-lifting" title="Link to this image">#</a></p>
</figcaption>
</figure>
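<p>To make the two notions concrete, below is a minimal, self-contained sketch in plain Python (all function names are illustrative, not part of the challenge codebase; an actual submission would build on TopoNetX classes). The topology lifting maps a graph to its clique complex, and the feature lifting assigns each new simplex the mean of its nodes’ features.</p>

```python
from itertools import combinations

def lift_graph_to_clique_complex(nodes, edges, max_dim=2):
    """Topology lifting: every (k+1)-clique of the graph becomes a k-simplex."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    simplices = {0: [frozenset([u]) for u in nodes],
                 1: [frozenset(e) for e in edges]}
    for dim in range(2, max_dim + 1):
        # A (dim+1)-subset of nodes is a simplex iff all of its pairs are edges.
        simplices[dim] = [
            frozenset(c) for c in combinations(nodes, dim + 1)
            if all(b in adj[a] for a, b in combinations(c, 2))
        ]
    return simplices

def lift_features(simplices, node_features):
    """Feature lifting: each simplex gets the mean of its nodes' features."""
    return {
        s: sum(node_features[u] for u in s) / len(s)
        for dim in simplices for s in simplices[dim]
    }

# Toy graph: a triangle (0, 1, 2) plus a pendant node 3.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
complex_ = lift_graph_to_clique_complex(nodes, edges)
features = lift_features(complex_, {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0})
```

<p>Here the triangle {0, 1, 2} is the only 2-simplex introduced by the topology lifting, and the feature lifting equips it with the mean feature of its three nodes.</p>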
</section>
<section id="description-of-the-challenge">
<h1>Description of the Challenge<a class="headerlink" href="#description-of-the-challenge" title="Link to this heading">#</a></h1>
<p>We propose that participants design and implement lifting mappings
between different data structures and topological domains (point-clouds,
graphs, hypergraphs, simplicial/cell/combinatorial complexes), to bridge
the gap between TDL and all kinds of existing datasets.</p>
<p>In particular, participants can either implement already proposed
liftings from the literature (see Related References section below), or
design original approaches; both options are equally allowed. In the
case of submissions with novel liftings, we emphasize that participants
will keep all the credit for their implementations, and neither the
challenge nor its related reward outcomes will prevent them from
publishing their independent works.</p>
<p>Moreover, aligned with the primary goal of broadening the footprint and
usage of TDL, the submission of liftings from point-clouds/graphs to
higher-order topological domains is encouraged. However, this is not a
requirement: the challenge also welcomes transformations between any
other pair of topological structures (e.g., from hypergraph to
simplicial domain).</p>
<p>In order to ensure consistency and composability, implementations
need to be compatible with the <code class="docutils literal notranslate"><span class="pre">BaseTransform</span></code> class of
<code class="docutils literal notranslate"><span class="pre">torch_geometric</span></code>, and should leverage the NetworkX/TopoNetX/TopoEmbedX
libraries when dealing with graph/higher-order datasets. Each submission
takes the form of a Pull Request to the <code class="docutils literal notranslate"><span class="pre">challenge-icml-2024</span></code> repository
containing the necessary code for implementing a lifting map. More
details are provided in the subsequent sections below.</p>
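<p>As an illustration of the expected interface, the following is a hedged sketch of a transform exposing the <code class="docutils literal notranslate"><span class="pre">BaseTransform</span></code>-style callable contract. The stub base class and the dictionary-based data object are stand-ins so the sketch is self-contained; a real submission would inherit from <code class="docutils literal notranslate"><span class="pre">torch_geometric.transforms.BaseTransform</span></code> (via the lifting base classes provided in the repository) and operate on <code class="docutils literal notranslate"><span class="pre">torch_geometric</span></code> data objects.</p>

```python
# Stand-in for torch_geometric.transforms.BaseTransform, included only so
# this sketch runs on its own; the real base class defines the same
# callable contract: transform(data) -> data.
class BaseTransform:
    def __call__(self, data):
        raise NotImplementedError

class NeighborhoodHypergraphLifting(BaseTransform):
    """Graph -> hypergraph: one hyperedge per node, covering its closed
    neighborhood (the node together with its direct neighbors).
    Illustrative class name, not one of the provided examples."""

    def __call__(self, data):
        adj = {u: {u} for u in data["nodes"]}
        for u, v in data["edges"]:
            adj[u].add(v)
            adj[v].add(u)
        data["hyperedges"] = [frozenset(nbhd) for nbhd in adj.values()]
        return data

# A path graph 0 - 1 - 2 yields three hyperedges: {0,1}, {0,1,2}, {1,2}.
data = {"nodes": [0, 1, 2], "edges": [(0, 1), (1, 2)]}
lifted = NeighborhoodHypergraphLifting()(data)
```

<p>Because the transform is just a callable on the data object, it composes naturally with other <code class="docutils literal notranslate"><span class="pre">torch_geometric</span></code> transforms in a pipeline.</p>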
<p><strong>Note:</strong> We invite participants to review this webpage regularly, as
more details may be added to address questions
raised to the organizers.</p>
</section>
<section id="reward-outcomes-1">
<h1>Reward Outcomes <a class="footnote-reference brackets" href="#id2" id="id1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a><a class="headerlink" href="#reward-outcomes-1" title="Link to this heading">#</a></h1>
<p>⭐️ Every submission respecting the submission requirements will be
included in a white paper summarizing the findings of the challenge,
published in PMLR through the <a class="reference external" href="https://gram-workshop.github.io">GRaM Workshop</a>
at ICML 2024. All participants with qualifying submissions will have
the opportunity to co-author this publication.</p>
<p>📘 Winning participants will also have the opportunity to co-author a
paper with an in-depth study on lifting procedures, focusing on
assessing different transformations across topological domains. This
work will be submitted to the Journal of Data-centric Machine
Learning Research (DMLR).</p>
<p>🏆 Winner submissions will receive special recognition at ICML 2024
<a class="reference external" href="https://gram-workshop.github.io">GRaM Workshop</a>, where the Award
Ceremony will take place.</p>
</section>
<section id="deadline">
<h1>Deadline<a class="headerlink" href="#deadline" title="Link to this heading">#</a></h1>
<p>The final submission deadline is <strong>July 12th, 2024 (AoE)</strong>. Participants
are welcome to modify their Pull Request until this time.</p>
</section>
<section id="guidelines">
<h1>Guidelines<a class="headerlink" href="#guidelines" title="Link to this heading">#</a></h1>
<p>Everyone can participate, and participation is free –only principal
PyT-Team developers are excluded. It is sufficient to:</p>
<ul class="simple">
<li><p>Send a valid Pull Request (i.e. passing all tests) before the
deadline.</p></li>
<li><p>Respect Submission Requirements (see below).</p></li>
</ul>
<p>Teams are accepted, and there is no restriction on the number of team
members. An acceptable Pull Request automatically subscribes a
participant/team to the challenge.</p>
<p>We encourage participants to submit their Pull Requests early
on, as this helps address potential issues with the code. Moreover,
earlier Pull Requests will be given priority consideration in the case
of multiple submissions of similar quality implementing the same
lifting.</p>
<p>A Pull Request should contain no more than one lifting. However, there
is no restriction on the number of submissions (Pull Requests) per
participant/team.</p>
</section>
<section id="submission-requirements">
<h1>Submission Requirements<a class="headerlink" href="#submission-requirements" title="Link to this heading">#</a></h1>
<p>The submission must implement a valid lifting transformation between any
pair of the following data structures: point-cloud/graph, hypergraph,
simplicial complex, cell complex, and combinatorial complex. For a
lifting to be valid, participants must implement a mapping between the
topological structures of the considered domains –<em>topology lifting</em>.
Participants may optionally implement a procedure to define the features
over the resulting topology –<em>feature lifting</em>.</p>
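<p>For instance, a point-cloud to hypergraph mapping is one valid source/destination pair. The sketch below (plain Python, with an illustrative function name) builds one hyperedge per point from its epsilon-ball neighborhood; it implements only the required topology lifting and leaves the optional feature lifting out.</p>

```python
import math

def ball_hypergraph_lifting(points, eps):
    """Point cloud -> hypergraph: one hyperedge per point, grouping all
    points within Euclidean distance eps of it. Singleton neighborhoods
    are dropped, since they add no higher-order structure."""
    hyperedges = []
    for p in points:
        members = frozenset(
            j for j, q in enumerate(points) if math.dist(p, q) <= eps
        )
        if len(members) > 1:
            hyperedges.append(members)
    # Deduplicate: nearby points can produce the same neighborhood.
    return set(hyperedges)

# Two well-separated pairs of points yield two hyperedges.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.1)]
he = ball_hypergraph_lifting(points, eps=1.0)
```

<p>The radius <code class="docutils literal notranslate"><span class="pre">eps</span></code> plays the role of the transform’s default parameter, which a submission would expose through its .yaml config file.</p>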
<p>All submitted code must comply with the challenge’s GitHub Actions
workflow, successfully passing all tests, linting, and formatting (i.e.,
ruff). Moreover, to ensure consistency, we ask participants to use
TopoNetX’s classes to manage simplicial/cell/combinatorial complexes
whenever these topological domains are the target –i.e., the destination– of
the lifting.</p>
<p><strong>Remark:</strong> We highly encourage the use of TopoNetX, TopoEmbedX and
NetworkX libraries.</p>
<section id="topology-lifting-required">
<h2>Topology Lifting (Required)<a class="headerlink" href="#topology-lifting-required" title="Link to this heading">#</a></h2>
<p>Submissions can implement already proposed liftings from the literature,
as well as novel approaches. In the case of original liftings, we note
that neither the challenge nor its related publications will prevent
participants from publishing their own work: they will keep all the
credit for their implementations.</p>
<p>For a lifting from a certain source domain <code class="docutils literal notranslate"><span class="pre">src</span></code> (e.g. graph) to a
topological destination <code class="docutils literal notranslate"><span class="pre">dst</span></code> (e.g. simplicial), a submission consists
of a Pull Request to the ICML Challenge repository that contains the
following files:</p>
<ol class="arabic">
<li><p><code class="docutils literal notranslate"><span class="pre">{id</span> <span class="pre">lifting}_lifting.py</span></code> (e.g. <code class="docutils literal notranslate"><span class="pre">clique_lifting.py</span></code>)</p>
<ul>
<li><p>Stored in the directory
<code class="docutils literal notranslate"><span class="pre">modules/transforms/liftings/{src}2{dst}/</span></code></p></li>
<li><p>Defines a class <code class="docutils literal notranslate"><span class="pre">{Id</span> <span class="pre">lifting}Lifting</span></code> that implements a
<code class="docutils literal notranslate"><span class="pre">lift_topology()</span></code> method performing the specific
<code class="docutils literal notranslate"><span class="pre">{src}2{dst}</span></code> topological lifting considered (e.g.
<code class="docutils literal notranslate"><span class="pre">SimplicialCliqueLifting</span></code> as a <code class="docutils literal notranslate"><span class="pre">graph2simplicial</span></code>
transform). It may also implement other auxiliary functions, and
can override parent methods if required.</p>
</li>
<li><p>This class must inherit from the <code class="docutils literal notranslate"><span class="pre">{Src}2{Dst}Lifting</span></code> abstract
class (e.g.
<code class="docutils literal notranslate"><span class="pre">Graph2SimplicialLifting</span></code>), which we provide for every pair
{<code class="docutils literal notranslate"><span class="pre">src</span></code>, <code class="docutils literal notranslate"><span class="pre">dst</span></code>} within the corresponding directory. When
justified, this and other abstract parent classes can be
modified.</p>
</li>
<li><p>The implemented lifting –and, in general, any implemented
data/feature transformation– must be added to the <code class="docutils literal notranslate"><span class="pre">TRANSFORMS</span></code>
dictionary in the <code class="docutils literal notranslate"><span class="pre">data_transform.py</span></code> file, located in the
<code class="docutils literal notranslate"><span class="pre">modules/transforms/</span></code> directory. The keys of the <code class="docutils literal notranslate"><span class="pre">TRANSFORMS</span></code>
dictionary correspond to the <code class="docutils literal notranslate"><span class="pre">transform_name</span></code> field in the
corresponding .yaml files, while the values refer to the classes that
implement the logic of the transforms.</p></li>
</ul>
<p><strong>Note:</strong> We provide several lifting examples for
<code class="docutils literal notranslate"><span class="pre">graph2simplicial</span></code>, <code class="docutils literal notranslate"><span class="pre">graph2cell</span></code>, and
<code class="docutils literal notranslate"><span class="pre">graph2hypergraph</span></code>.</p>
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">{id</span> <span class="pre">lifting}_lifting.yaml</span></code> (e.g. <code class="docutils literal notranslate"><span class="pre">clique_lifting.yaml</span></code>)</p>
<ul class="simple">
<li><p>Stored in the directory
<code class="docutils literal notranslate"><span class="pre">configs/transforms/liftings/{src}2{dst}/</span></code></p></li>
<li><p>Defines the default parameters of the implemented transform.</p></li>
</ul>
<p><strong>Note:</strong> You can find config examples for all our implemented
liftings and data transforms.</p>
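As an illustration, a minimal lifting config could look like the following; the exact field names are assumptions based on the conventions described above:

```yaml
# Hypothetical clique_lifting.yaml; field names are illustrative.
transform_type: 'lifting'
# Must match a key of the TRANSFORMS dictionary:
transform_name: "SimplicialCliqueLifting"
# Default parameters of the implemented transform:
complex_dim: 3
```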
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">{id</span> <span class="pre">lifting}_lifting.ipynb</span></code> (e.g. <code class="docutils literal notranslate"><span class="pre">clique_lifting.ipynb</span></code>)</p>
<ul>
<li><p>Stored in the directory <code class="docutils literal notranslate"><span class="pre">tutorials/{src}2{dst}/</span></code></p></li>
<li><p>Contains the following steps:</p>
<ol class="arabic">
<li><p>Dataset Loading</p>
<ul class="simple">
<li><p>Implements the pipeline to load a dataset from the <code class="docutils literal notranslate"><span class="pre">src</span></code>
domain. Since the challenge repository doesn’t allow storing
large files, loaders must download datasets from external
sources into the <code class="docutils literal notranslate"><span class="pre">datasets/</span></code> folder.</p></li>
<li><p>This pipeline is provided for several graph-based datasets.
For any other <code class="docutils literal notranslate"><span class="pre">src</span></code> domain, participants may
transform graph datasets into the corresponding domain
through our provided lifting mappings, or simply drop
their connectivity to obtain point clouds.</p></li>
<li><p><em>(Bonus)</em> Designing a loader for a new dataset (one that
is not already provided in the tutorials) will be viewed
favorably in the final evaluation.</p></li>
</ul>
</li>
<li><p>Pre-processing the Dataset</p>
<ul>
<li><p>Applies the lifting transform to the dataset.</p></li>
<li><div class="line-block">
<div class="line">Needs to be done through the <code class="docutils literal notranslate"><span class="pre">PreProcessor</span></code>, which we
provide in</div>
<div class="line"><code class="docutils literal notranslate"><span class="pre">modules/io/preprocess/preprocessor.py</span></code>.</div>
</div>
</li>
</ul>
</li>
<li><p>Running a Model over the Lifted Dataset</p>
<ul class="simple">
<li><p>Creates a neural network model that operates over the
<code class="docutils literal notranslate"><span class="pre">dst</span></code> domain, leveraging TopoModelX for higher-order
topologies or torch_geometric for graphs.</p></li>
<li><p>Runs the model on the lifted dataset.</p></li>
</ul>
</li>
</ol>
</li>
</ul>
<p><strong>Note:</strong> Several examples are provided in <code class="docutils literal notranslate"><span class="pre">tutorials/</span></code>.</p>
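The three tutorial steps above can be sketched end-to-end as follows; every function here is a self-contained toy stand-in (an assumption), whereas the real tutorials use the repository's loaders, the <code class="docutils literal notranslate"><span class="pre">PreProcessor</span></code>, and TopoModelX models:

```python
# Toy, self-contained sketch of the three tutorial steps described above.
# All names are illustrative stand-ins for the repository's actual classes.

def load_dataset(name):
    """Step 1: dataset loading (real loaders download into datasets/)."""
    return {"edges": [(0, 1), (1, 2), (0, 2)], "x": [[1.0], [2.0], [3.0]]}

def clique_lifting(dataset):
    """Toy lifting: the triangle (a 3-clique) becomes one 2-simplex."""
    return {**dataset, "triangles": [(0, 1, 2)]}

def preprocess(dataset, transform):
    """Step 2: pre-processing, i.e. applying the lifting transform."""
    return transform(dataset)

def run_model(lifted):
    """Step 3: a placeholder 'model' on the lifted (dst) domain that
    sums node features over each 2-simplex."""
    return [sum(lifted["x"][v][0] for v in tri) for tri in lifted["triangles"]]

lifted = preprocess(load_dataset("toy"), clique_lifting)
print(run_model(lifted))  # [6.0]
```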
</li>
<li><p><code class="docutils literal notranslate"><span class="pre">test_{id</span> <span class="pre">lifting}.py</span></code> (e.g. <code class="docutils literal notranslate"><span class="pre">test_cycle_lifting.py</span></code>)</p>
<ul class="simple">
<li><p>Stored in the directory <code class="docutils literal notranslate"><span class="pre">tests/transforms/liftings/{src}2{dst}/</span></code></p></li>
<li><p>Contains a single class, <code class="docutils literal notranslate"><span class="pre">Test{Id</span> <span class="pre">lifting}</span></code>, with unit
tests for all of the methods of the
<code class="docutils literal notranslate"><span class="pre">{Id</span> <span class="pre">lifting}Lifting</span></code> class.</p></li>
<li><p>Please use pytest (not unittest).</p></li>
</ul>
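A test module for such a lifting could take the following shape (run with pytest); the class under test here is a toy stand-in, and all names are assumptions:

```python
# Hypothetical shape of a test module such as
# tests/transforms/liftings/graph2simplicial/test_clique_lifting.py;
# the names and the toy class under test are assumptions.

class CliqueLifting:
    """Stand-in for the lifting class under test."""
    def lift_topology(self, edges):
        # Toy logic: a triangle yields one 2-simplex.
        return {"triangles": [(0, 1, 2)]} if len(edges) == 3 else {"triangles": []}

class TestCliqueLifting:
    """Unit tests for every method of the lifting; collected by pytest."""

    def setup_method(self):
        self.lifting = CliqueLifting()

    def test_lift_topology_triangle(self):
        out = self.lifting.lift_topology([(0, 1), (1, 2), (0, 2)])
        assert out["triangles"] == [(0, 1, 2)]

    def test_lift_topology_empty(self):
        assert self.lifting.lift_topology([])["triangles"] == []
```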
<p><strong>Note:</strong> We provide several examples in the corresponding
directories.</p>
</li>
</ol>
</section>
<section id="feature-lifting-optional">
<h2>Feature Lifting (Optional)<a class="headerlink" href="#feature-lifting-optional" title="Link to this heading">#</a></h2>
<p>Some TDL models require well-defined features on higher-order structures
(e.g. 2-cells, or hyperedges); therefore, in their more general
formulation, liftings also need to produce initial features for every
topological element of the <code class="docutils literal notranslate"><span class="pre">dst</span></code> domain. In particular, in all our
examples we make use of a straightforward <code class="docutils literal notranslate"><span class="pre">SumProjection</span></code> transform to
that end, which gets the desired structural features by sequentially
projecting the original signals via incidence matrices.</p>
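The projection described above can be sketched in pure Python with unsigned incidence matrices; the triangle example and function name are illustrative assumptions, not the repository's actual implementation:

```python
# Sketch of a SumProjection-style feature lifting: features on rank-k
# cells are obtained by summing the features of their incident rank-(k-1)
# cells, i.e. x_k = |B_k|^T x_{k-1}. The triangle example is illustrative.

def sum_projection(incidence, features):
    """incidence[i][j] = 1 if source cell i is incident to target cell j."""
    n_src, n_tgt = len(incidence), len(incidence[0])
    dim = len(features[0])
    return [
        [sum(incidence[i][j] * features[i][k] for i in range(n_src))
         for k in range(dim)]
        for j in range(n_tgt)
    ]

x0 = [[1.0], [2.0], [3.0]]   # node features of a filled triangle
b1 = [[1, 1, 0],             # nodes -> edges incidence
      [1, 0, 1],
      [0, 1, 1]]
x1 = sum_projection(b1, x0)  # edge features: [[3.0], [4.0], [5.0]]
b2 = [[1], [1], [1]]         # edges -> the single 2-cell
x2 = sum_projection(b2, x1)  # 2-cell feature: [[12.0]]
```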
<p>Participants are more than welcome to implement new feature lifting
mappings, which can be added to the <code class="docutils literal notranslate"><span class="pre">feature_liftings.py</span></code> file in the
<code class="docutils literal notranslate"><span class="pre">modules/transforms/feature_liftings/</span></code> directory. However, we note
that this is optional, and it will only be regarded as a bonus.</p>
<p><strong>Note:</strong> Please reach out if you want to know more about how
to implement a new feature lifting and/or a novel data loader. We also
provide some data manipulation transforms that could be useful when
defining more complex data pipelines.</p>
</section>
</section>
<section id="evaluation">
<h1>Evaluation<a class="headerlink" href="#evaluation" title="Link to this heading">#</a></h1>
<section id="award-categories">
<h2>Award Categories<a class="headerlink" href="#award-categories" title="Link to this heading">#</a></h2>
<p>Given the lack of an exhaustive analysis of the different types of
procedures for inferring topological structure within TDL, there is no
particular requirement for submitted liftings, apart from a
high-quality code implementation. To promote and guide diversity in
submissions, we propose the following general, non-mutually exclusive
award categories:</p>
<ul class="simple">
<li><p>Best implementation of an existing lifting from the literature.</p></li>
<li><p>Best novel lifting design that only leverages the relational
information of the source domain (i.e. connectivity-based lifting).</p></li>
<li><p>Best novel lifting design that leverages the original features of the
source domain to infer the target topology (i.e. feature-based
lifting). If available, connectivity can also be simultaneously used.</p></li>
<li><p>Best implementation of a deterministic lifting (existing or novel).</p></li>
<li><p>Best implementation of a non-deterministic lifting (existing or
novel).</p></li>
</ul>
<p>We encourage participants to tag and categorize their Pull Requests with
these and other possible taxonomies. In fact, we might reconsider some
categories based on participants’ feedback and submissions. Additionally,
we reserve the right to award honorable mentions considering aspects
such as originality, theoretical robustness, loading interesting
datasets, implementing new feature liftings, etc.</p>
</section>
<section id="evaluation-procedure">
<h2>Evaluation Procedure<a class="headerlink" href="#evaluation-procedure" title="Link to this heading">#</a></h2>
<p>The Condorcet method will be used to rank the submissions and decide on
the winners in each category. The evaluation criteria will be:</p>
<ul class="simple">
<li><p>Does the submission implement the lifting correctly? Is it reasonable
and well-defined?</p></li>
<li><p>How readable/clean is the implementation? How well does the
submission respect the submission requirements?</p></li>
<li><p>Is the submission well-written? Do the docstrings clearly explain the
methods? Are the unit tests robust?</p></li>
</ul>
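For illustration, a Condorcet winner could be computed from ranked ballots along the following lines; this is a sketch of the method as commonly defined, not the organizers' actual tallying procedure:

```python
# Illustrative Condorcet tally: the winner beats every other candidate
# in head-to-head majorities over the ranked ballots. Ballots are toy data.

def condorcet_winner(ballots):
    """Return the candidate preferred to every other by a strict majority
    of ballots, or None when no Condorcet winner exists."""
    candidates = set(ballots[0])

    def beats(a, b):
        wins = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates - {c}):
            return c
    return None

ballots = [["A", "B", "C"],  # each ballot ranks submissions best-first
           ["A", "C", "B"],
           ["B", "A", "C"]]
print(condorcet_winner(ballots))  # A
```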
<p>Note that these criteria do not reward final model performance, nor the
complexity of the method. Rather, the goal is to implement well-written
and accurate liftings that will unlock further experimental evidence and
insights in this field.</p>
<p>Selected PyT-Team maintainers and collaborators, as well as each team
whose submission(s) respect(s) the guidelines, will vote once via a
Google Form to express their preference for the best submission in each
category. Note that each team gets only one vote per domain, even if
the team has several participants.</p>
<p>A link to a Google Form will be provided to record the votes. While the
form will ask for an email address to identify the voter, voters’
identities will remain secret; only the final ranking will be shared.</p>
</section>
</section>
<section id="questions">
<h1>Questions<a class="headerlink" href="#questions" title="Link to this heading">#</a></h1>
<p>Feel free to contact us through GitHub issues on this repository, or
through the <a class="reference external" href="https://tda-in-ml.slack.com/join/shared_invite/enQtOTIyMTIyNTYxMTM2LTA2YmQyZjVjNjgxZWYzMDUyODY5MjlhMGE3ZTI1MzE4NjI2OTY0MmUyMmQ3NGE0MTNmMzNiMTViMjM2MzE4OTc#/">Geometry and Topology in Machine Learning
slack</a>.
Alternatively, you can contact us via mail at any of these accounts:
<a class="reference external" href="mailto:guillermo.bernardez%40upc.edu">guillermo<span>.</span>bernardez<span>@</span>upc<span>.</span>edu</a>, <a class="reference external" href="mailto:lev.telyatnikov%40uniroma1.it">lev<span>.</span>telyatnikov<span>@</span>uniroma1<span>.</span>it</a>.</p>
</section>
<section id="related-references">
<span id="reference-list"></span><h1>Related References<a class="headerlink" href="#related-references" title="Link to this heading">#</a></h1>
<p>To support participants, in this section we share some related
references that propose topological liftings or that might help in
defining novel ones.</p>
<ol class="arabic simple">
<li><p>Papillon, M., Sanborn, S., Hajij, M., & Miolane, N. (2023).
Architectures of Topological Deep Learning: A Survey on Topological
Neural Networks. <em>arXiv preprint arXiv:2304.10031</em>.</p></li>
<li><p>Hajij, M., Zamzmi, G., Papamarkou, T., Miolane, N., Guzmán-Sáenz, A.,
Ramamurthy, K. N., et al. (2022). Topological deep learning: Going
beyond graph data. <em>arXiv preprint arXiv:2206.00606</em>.</p></li>
<li><p>Baccini, F., Geraci, F., & Bianconi, G. (2022). Weighted simplicial
complexes and their representation power of higher-order network data
and topology. <em>Physical Review E, 106</em>(3), 034319.</p></li>
<li><p>Barbarossa, S., & Sardellitti, S. (2020). Topological signal
processing over simplicial complexes. <em>IEEE Transactions on Signal
Processing, 68</em>, 2992–3007.</p></li>
<li><p>Battiloro, C., Spinelli, I., Telyatnikov, L., Bronstein, M.,
Scardapane, S., & Di Lorenzo, P. (2023). From latent graph to latent
topology inference: differentiable cell complex module. <em>arXiv
preprint arXiv:2305.16174</em>.</p></li>
<li><p>Benson, A. R., Gleich, D. F., & Higham, D. J. (2021). Higher-order
network analysis takes off, fueled by classical ideas and new data.
<em>arXiv preprint arXiv:2103.05031</em>.</p></li>
<li><p>Bodnar, C., Frasca, F., Otter, N., Wang, Y., Lio, P., Montufar, G.
F., & Bronstein, M. (2021). Weisfeiler and Lehman go cellular: CW
networks. In <em>Advances in Neural Information Processing Systems</em>
(Vol. 34, pp. 2625–2640).</p></li>
<li><p>Bodnar, C., Frasca, F., Wang, Y., Otter, N., Montufar, G. F., Lio,
P., & Bronstein, M. (2021, July). Weisfeiler and Lehman go
topological: Message passing simplicial networks. In <em>International
Conference on Machine Learning</em> (pp. 1026–1037). PMLR.</p></li>
<li><p>Elshakhs, Y. S., Deliparaschos, K. M., Charalambous, T., Oliva, G., &
Zolotas, A. (2024). A Comprehensive Survey on Delaunay Triangulation:
Applications, Algorithms, and Implementations Over CPUs, GPUs, and
FPGAs. <em>IEEE Access</em>.</p></li>
<li><p>Ferri, M., Bergomi, D. M. G., & Zu, L. (2018). Simplicial complexes
from graphs towards graph persistence. <em>arXiv preprint
arXiv:1805.10716</em>.</p></li>
<li><p>Gao, Y., Zhang, Z., Lin, H., Zhao, X., Du, S., & Zou, C. (2020).
Hypergraph learning: Methods and practices. <em>IEEE Transactions on
Pattern Analysis and Machine Intelligence, 44</em>(5), 2548–2566.</p></li>
<li><p>Hajij, M., & Istvan, K. (2021). Topological deep learning:
Classification neural networks. <em>arXiv preprint arXiv:2102.08354</em>.</p></li>
<li><p>Hajij, M., Zamzmi, G., Papamarkou, T., Miolane, N., Guzmán-Sáenz, A.,
& Ramamurthy, K. N. (2022). Higher-order attention networks. <em>arXiv
preprint arXiv:2206.00606, 2</em>(3), 4.</p></li>
<li><p>Hajij, M., Zamzmi, G., Papamarkou, T., Guzman-Saenz, A., Birdal, T.,
& Schaub, M. T. (2023). Combinatorial complexes: bridging the gap
between cell complexes and hypergraphs. In <em>2023 57th Asilomar
Conference on Signals, Systems, and Computers</em> (pp. 799–803). IEEE.</p></li>
<li><p>Hoppe, J., & Schaub, M. T. (2024). Representing Edge Flows on Graphs
via Sparse Cell Complexes. In <em>Learning on Graphs Conference</em> (pp.
1-1). PMLR.</p></li>
<li><p>Jogl, F., Thiessen, M., & Gärtner, T. (2022). Reducing learning on
cell complexes to graphs. In <em>ICLR 2022 Workshop on Geometrical and
Topological Representation Learning</em>.</p></li>
<li><p>Kahle, M. (2007). The neighborhood complex of a random graph.
<em>Journal of Combinatorial Theory, Series A, 114</em>(2), 380–387.</p></li>
<li><p>Kahle, M., & others. (2014). Topology of random simplicial complexes:
a survey. <em>AMS Contemp. Math, 620</em>, 201–222.</p></li>
<li><p>Kahle, M. (2016). Random simplicial complexes. <em>arXiv preprint
arXiv:1607.07069</em>.</p></li>
<li><p>Liu, X., & Zhao, C. (2023). Eigenvector centrality in simplicial
complexes of hypergraphs. <em>Chaos: An Interdisciplinary Journal of
Nonlinear Science, 33</em>(9).</p></li>
<li><p>Lucas, M., Gallo, L., Ghavasieh, A., Battiston, F., & De Domenico, M.
(2024). Functional reducibility of higher-order networks. <em>arXiv
preprint arXiv:2404.08547</em>.</p></li>
<li><p>Ruggeri, N., Battiston, F., & De Bacco, C. (2024). Framework to
generate hypergraphs with community structure. <em>Physical Review E,
109</em>(3), 034309.</p></li>
</ol>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="id2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="#id1">1</a><span class="fn-bracket">]</span></span>
<p>By law, US researchers are not allowed to co-author papers with
scholars from some countries and institutions. Participants are
responsible for checking eligibility.</p>
</aside>
</aside>
</section>
</article>
<footer class="prev-next-footer">
<div class="prev-next-area">
<a class="left-prev"
href="index_tex.html"
title="previous page">
<i class="fa-solid fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">TopoEmbedX (TEX)</p>
</div>
</a>
<a class="right-next"
href="about.html"
title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Pyt-Team</p>
</div>
<i class="fa-solid fa-angle-right"></i>
</a>
</div>
</footer>
</div>
<div class="bd-sidebar-secondary bd-toc"><div class="sidebar-secondary-items sidebar-secondary__inner">
<div class="sidebar-secondary-item">
<div
id="pst-page-navigation-heading-2"
class="page-toc tocsection onthispage">
<i class="fa-solid fa-list"></i> On this page
</div>
<nav class="bd-toc-nav page-toc" aria-labelledby="pst-page-navigation-heading-2">
<ul class="visible nav section-nav flex-column">
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#">ICML Challenge 2024</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#winners">Winners</a><ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#st-category">🏆 1st Category</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#nd-category">🏆 2nd Category</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#rd-category">🏆 3rd Category</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#th-category">🏆 4th Category</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#honorable-mentions">🏆 Honorable Mentions</a></li>
</ul>
</li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#motivation">Motivation</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#description-of-the-challenge">Description of the Challenge</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#reward-outcomes-1">Reward Outcomes </a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#deadline">Deadline</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#guidelines">Guidelines</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#submission-requirements">Submission Requirements</a><ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#topology-lifting-required">Topology Lifting (Required)</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#feature-lifting-optional">Feature Lifting (Optional)</a></li>
</ul>
</li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#evaluation">Evaluation</a><ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#award-categories">Award Categories</a></li>
<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#evaluation-procedure">Evaluation Procedure</a></li>
</ul>
</li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#questions">Questions</a></li>
<li class="toc-h1 nav-item toc-entry"><a class="reference internal nav-link" href="#related-references">Related References</a></li>
</ul>
</nav></div>
<div class="sidebar-secondary-item">
<div class="tocsection sourcelink">
<a href="../_sources/packs/challenge.rst.txt">
<i class="fa-solid fa-file-lines"></i> Show Source
</a>
</div>
</div>
</div></div>
</div>
<footer class="bd-footer-content">
</footer>
</main>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../_static/scripts/bootstrap.js?digest=8d27b9dea8ad943066ae"></script>
<script src="../_static/scripts/pydata-sphinx-theme.js?digest=8d27b9dea8ad943066ae"></script>
<footer class="bd-footer">
<div class="bd-footer__inner bd-page-width">
<div class="footer-items__start">
<div class="footer-item">
<p class="copyright">
© Copyright 2022-2023, PyT-Team, Inc.
<br/>
</p>
</div>
<div class="footer-item">
<p class="sphinx-version">
Created using <a href="https://www.sphinx-doc.org/">Sphinx</a> 7.3.7.
<br/>
</p>
</div>
</div>
<div class="footer-items__end">
<div class="footer-item">
<p class="theme-version">
Built with the <a href="https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html">PyData Sphinx Theme</a> 0.15.2.
</p></div>
</div>
</div>
</footer>
</body>
</html>