Recent work has demonstrated that dynamical system models can be discovered through scientific machine learning. In this data-driven process, measurements of the system are collected, and an optimization problem is solved to identify the most likely physical model (a system of differential equations) that would produce those measurements. The process is computationally challenging, both in generating the data (which, in our case, comes from an expensive, high-fidelity numerical solver) and in solving the optimization problem. This project aims to accelerate both steps using Julia, a high-performance language that retains many of the benefits of high-level interpreted languages such as MATLAB and Python. The student will port existing Python code to Julia and report time-to-solution statistics comparing the two implementations. The student will also use the DiffEqFlux.jl library to learn neural differential equations and Hamiltonian neural networks. DiffEqFlux.jl runs on both CPUs and GPUs, and the student will explore the speedups achievable on NVIDIA V100 and A100 GPUs.
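To make the model-discovery step concrete, the following Python sketch recovers the coefficient matrix of a toy dynamical system (a damped oscillator) from simulated measurements by solving a least-squares problem over a library of candidate terms, in the spirit of sparse-regression methods such as SINDy. This is an illustrative assumption, not the project's actual code: the true project uses a high-fidelity solver for the data, and its model library and optimizer are not specified here.

```python
import numpy as np

# Hypothetical ground-truth system: a damped harmonic oscillator,
#   x' = v,  v' = -2*x - 0.5*v
A_true = np.array([[0.0, 1.0],
                   [-2.0, -0.5]])

# Generate "measurements" with small RK4 steps; this stands in for the
# expensive high-fidelity numerical solver mentioned in the description.
def rk4_step(f, y, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda y: A_true @ y
dt, n_steps = 0.01, 500
traj = np.empty((n_steps + 1, 2))
traj[0] = [1.0, 0.0]
for i in range(n_steps):
    traj[i + 1] = rk4_step(f, traj[i], dt)

# Approximate time derivatives from the data with central differences.
dydt = (traj[2:] - traj[:-2]) / (2 * dt)
X = traj[1:-1]

# Candidate term library: linear terms [x, v] only, for brevity.
Theta = X

# Least-squares fit: each column of Xi holds one equation's coefficients,
# so the recovered dynamics matrix is Xi transposed.
Xi, *_ = np.linalg.lstsq(Theta, dydt, rcond=None)
A_fit = Xi.T
print(np.round(A_fit, 3))
```

The recovered matrix `A_fit` closely matches `A_true`; in practice the library would include nonlinear candidate terms and a sparsity-promoting optimizer, which is where most of the computational cost arises.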