Investigate writing custom C++/CUDA function #7

Open
rdaly525 opened this issue Dec 17, 2017 · 0 comments
Note: 'N' refers to the number of select bits of a mux (or, equivalently, the number of input bits of a LUT), so a single mux/LUT selects among 2^N values.
'K' refers to the number of LUTs in a particular layer.

layers.py:135
The function LutLayer builds a single layer of LUTs; it specifies the connectivity and instantiates the LUTs.

layers.py:113
The function LutN creates the LUT weights and the Mux.

layers.py:66
The function MuxSTriangle does the actual mux computation: 'I' is the mux input and 'S' is the mux select. When used as a LUT, 'I' holds the weights and 'S' is the LUT input.
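
For reference, here is a minimal sketch of what a mux-tree ("triangle") reduction over 2^N candidate values with N soft select bits might look like. The tensor shapes, names, and bit ordering below are assumptions for illustration, not the repository's actual MuxSTriangle code:

```python
import torch

def mux_tree(I, S):
    """Soft mux tree: reduce 2**N candidate values I to one output
    using N select signals S (each assumed to lie in [0, 1]).

    I: tensor of shape (..., 2**N)  - candidate inputs (for a LUT, the weights)
    S: tensor of shape (..., N)     - soft select bits (for a LUT, the inputs)
    """
    N = S.shape[-1]
    vals = I
    for n in range(N):
        s = S[..., n:n + 1]                        # select bit n, kept broadcastable
        lo, hi = vals[..., 0::2], vals[..., 1::2]  # pair up adjacent candidates
        vals = (1 - s) * lo + s * hi               # soft 2:1 mux per pair
    return vals.squeeze(-1)

# Example usage (hypothetical shapes): a 4-input LUT with 16 weights,
# evaluated on a batch of 8 soft input vectors.
I = torch.randn(2 ** 4)
S = torch.rand(8, 4)
out = mux_tree(I, S)   # shape (8,)
```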

For custom code, we need to speed up either the full LutLayer (which instantiates a LutN, which instantiates a MuxSTriangle) or just the MuxSTriangle.
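
One possible shape for the custom op is a `torch.autograd.Function` that delegates the forward and backward passes to a compiled C++/CUDA extension (e.g. built with `torch.utils.cpp_extension.load`). The sketch below is only a starting point: the source file `mux_cuda_kernel.cu` and its exported `forward`/`backward` entry points are hypothetical and do not exist in the repository yet.

```python
import torch
from torch.utils.cpp_extension import load

# Hypothetical: JIT-compile a C++/CUDA source into an importable module.
# The file name and the functions it exports are assumptions.
mux_cuda = load(name="mux_cuda", sources=["mux_cuda_kernel.cu"])

class MuxSTriangleFn(torch.autograd.Function):
    """Custom autograd Function delegating the mux-tree computation to a compiled kernel."""

    @staticmethod
    def forward(ctx, I, S):
        out = mux_cuda.forward(I, S)          # assumed kernel entry point
        ctx.save_for_backward(I, S)
        return out

    @staticmethod
    def backward(ctx, grad_out):
        I, S = ctx.saved_tensors
        grad_I, grad_S = mux_cuda.backward(grad_out, I, S)  # assumed kernel entry point
        return grad_I, grad_S
```

LutN/LutLayer could then call `MuxSTriangleFn.apply(I, S)` in place of the pure-Python mux tree, which keeps the layer-level Python code unchanged while moving the inner loop into the kernel.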
