Gabriel Jacob Perin

Undergraduate Thesis - MAC0499 - IME/USP

Introduction

Geometric Deep Learning (GDL) is a subfield of machine learning focused on designing neural network architectures that respect the inherent symmetries and invariances of the data. Traditionally, the development of neural network architectures has been largely an empirical craft, driven by trial and error. GDL instead aims to provide a mathematical framework that connects architectural design to the underlying structure of the data domain, so that the appropriate inductive biases are built directly into the architecture.
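To make the idea of symmetry-aware design concrete, the sketch below (an illustration added for this introduction, not a model prescribed by the plan; the class name and dimensions are hypothetical, and PyTorch is assumed as the framework) builds a set-level function that is invariant to the order of its inputs by construction: per-element features are pooled with a symmetric sum, so permuting the input elements cannot change the output.

```python
import torch
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    """Minimal DeepSets-style sketch: phi is applied to each element,
    a symmetric sum pools over the set, and rho maps the pooled
    representation to the output. The sum guarantees invariance
    to any reordering of the input elements."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.rho = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (set_size, in_dim)
        pooled = self.phi(x).sum(dim=0)  # symmetric aggregation over the set
        return self.rho(pooled)

# Permuting the input leaves the output unchanged (up to floating-point error).
model = PermutationInvariantNet(in_dim=3, hidden_dim=16, out_dim=2)
x = torch.randn(5, 3)
perm = torch.randperm(5)
assert torch.allclose(model(x), model(x[perm]), atol=1e-5)
```

Here the invariance is a structural property of the architecture rather than something the model must learn from data, which is the kind of inductive bias GDL formalizes.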

Objective

Primary Resources


Project Plan

Monograph – Theoretical Component

1. Introduction (May - June)

2. Preliminary Concepts (July - August)

3. Domains of GDL (September - October)

4. GDL Models (November - December)

Explore well-known architectures within the GDL framework. For each model, describe its structure, explain how it incorporates geometric priors, and demonstrate how it fits into the broader GDL paradigm.

Include practical examples and applications where these models are used.
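As one example of the kind of model this chapter will cover, the sketch below implements a single graph convolution layer in the spirit of graph neural networks such as the GCN. It is an illustrative sketch only (the class name and the simple mean-over-neighborhood normalization are assumptions, not the exact formulation the monograph will adopt), intended to show how a geometric prior, here permutation equivariance over graph nodes, appears in code.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One graph convolution layer: each node averages the features of its
    neighborhood (including itself, via self-loops) and passes the result
    through a shared linear map. Because the same update rule is applied at
    every node, the layer is equivariant to relabelings of the nodes."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) with 0/1 entries
        adj_hat = adj + torch.eye(adj.size(0))  # add self-loops
        deg = adj_hat.sum(dim=1, keepdim=True)  # closed-neighborhood sizes
        x = (adj_hat @ x) / deg                 # mean over the neighborhood
        return torch.relu(self.linear(x))

# Tiny example: a path graph 0 - 1 - 2 with 4-dimensional node features.
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
layer = SimpleGraphConv(in_dim=4, out_dim=8)
out = layer(torch.randn(3, 4), adj)  # -> shape (3, 8)
```

The shared, neighborhood-local update is the geometric prior: the layer's behavior does not depend on how the nodes happen to be numbered, only on the graph structure itself.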


Experimental Component (Optional)

Select a real-world problem or domain and design experiments that showcase how GDL principles can be applied. Potential areas include: