Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation

Created on 2023-02-05T16:29:45-06:00

Return to the Index

This card pertains to a resource available on the internet.

This card can also be read via Gemini.

The authors are credited as anonymous in the PDF but are identified as "Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh" on the page.

Uses the measured execution time of the compiled module as the cost signal in the objective function that guides compilation.

The code generator's parameters are optimized so that the compiled neural network executes as fast as possible.
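A minimal sketch of that cost signal, with a hypothetical `measure_cost` helper standing in for the real timing harness (the function name and trial count are assumptions, not from the paper):

```python
import time

def measure_cost(compiled_fn, n_trials=5):
    """Hypothetical cost signal: wall-clock time of the compiled module.
    The fastest of several trials is kept to reduce timing noise."""
    best = float("inf")
    for _ in range(n_trials):
        start = time.perf_counter()
        compiled_fn()
        best = min(best, time.perf_counter() - start)
    return best

# Toy stand-in for a compiled module: sum a fixed range.
cost = measure_cost(lambda: sum(range(10_000)))
```

The optimizer would then prefer whichever candidate configuration yields the smallest `cost`.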

Built around "knobs" in a "design space": compiler parameters (such as tiling or unrolling factors) that can be adjusted to tune the generated code.

Uses reinforcement learning with policy gradients to output one action per knob: increment, decrement, or stay.
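The per-knob action step can be sketched as follows; the knob names, bounds, and random action choice are illustrative assumptions (a trained policy would supply the actions):

```python
import random

ACTIONS = (-1, 0, +1)  # decrement, stay, increment

def apply_actions(knobs, actions, bounds):
    """Apply one increment/decrement/stay action per knob,
    clamping each knob to its allowed range."""
    updated = {}
    for name, delta in zip(knobs, actions):
        lo, hi = bounds[name]
        updated[name] = max(lo, min(hi, knobs[name] + delta))
    return updated

# Hypothetical design space with two knobs.
knobs = {"tile_x": 8, "unroll": 2}
bounds = {"tile_x": (1, 64), "unroll": (1, 8)}

# Stand-in for sampling from the learned policy.
random.seed(0)
actions = [random.choice(ACTIONS) for _ in knobs]
new_knobs = apply_actions(knobs, actions, bounds)
```

Each step moves the configuration at most one unit per knob, so the search walks the design space gradually rather than jumping to arbitrary points.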

Chameleon's reinforcement-learning search requires about 2.8 times fewer search steps than simulated annealing.