A lightweight and efficient implementation of neural networks in C++.
- Layered Architecture: Supports multi-layer networks with customizable input and output sizes.
- Activation Functions: Provides ReLU and Sigmoid activations, with easy integration of additional functions.
- Weight Initialization: Offers random and Xavier initialization methods for setting initial weights.
- Optimizers: Includes Stochastic Gradient Descent (SGD) and Adam optimizers for training.
- Loss Function: Utilizes Mean Squared Error (MSE) for measuring prediction accuracy.
To incorporate Neurama into your project:
- Include the Header: Add `#include "neurama.h"` to your source file.
- Create a Neural Network: Instantiate a `NeuralNetwork` object.
- Add Layers: Use the `addLayer` method to define layers, specifying input size, output size, activation function, and initialization method.
- Train the Network: Provide training data and call the `trainModel` function, specifying the number of epochs and learning rate.
- Make Predictions: Use the `forward` method to obtain outputs for new inputs.
/*
Activation functions:
ReLU,      // 0
sigmoid,   // 1
tanh,      // 2
leakyReLU, // 3
elu,       // 4
softplus,  // 5
softmax    // 6 - NEW!
Optimizers:
SGD,
ADAM
Loss functions:
MSE,               // Mean Squared Error
MAE,               // Mean Absolute Error
BinaryCrossEntropy // Binary Cross-Entropy
*/
#include "neurama.h"
#include <vector>
#include <iostream>

int main() {
    // 🚀 Neural network initialization
    NeuralNetwork nn;
    nn.addLayer(2, 4, actMethod::ReLU, InitMethod::XAVIER);
    nn.addLayer(4, 1, actMethod::sigmoid, InitMethod::XAVIER);

    // 🔍 Training data for the XOR problem
    std::vector<std::vector<double>> trainInputs = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    std::vector<std::vector<double>> trainTargets = { {0}, {1}, {1}, {0} };

    // 💡 Sample input for monitoring progress during training
    std::vector<double> sampleInput = {0, 1};
    int epochs = 100;
    float learning_rate = 0.01;

    // ⚙️ Training the network
    std::cout << "Starting training... 🔥" << std::endl;
    nn.trainModel(trainInputs, trainTargets, epochs, learning_rate, sampleInput, OptimizerType::ADAM);
    std::cout << "Training complete! ✅" << std::endl;

    // 🧪 Testing the network with the training data
    std::cout << "\nTest results:" << std::endl;
    nn.testModel(trainInputs, trainTargets);
    return 0;
}
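To see what a call like `forward` computes, here is a minimal hand-rolled sketch of the same 2-4-1 pass (ReLU hidden layer, sigmoid output) with made-up fixed weights; a trained Neurama network performs the equivalent computation with its learned weights, and the names `denseLayer` and `forwardSketch` below are illustrative, not part of Neurama's API:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// One dense layer: out[j] = act(sum_i weights[j][i] * input[i] + bias[j]).
std::vector<double> denseLayer(const std::vector<double>& input,
                               const std::vector<std::vector<double>>& weights,
                               const std::vector<double>& bias,
                               bool useReLU) {
    std::vector<double> out(weights.size());
    for (std::size_t j = 0; j < weights.size(); ++j) {
        double z = bias[j];
        for (std::size_t i = 0; i < input.size(); ++i)
            z += weights[j][i] * input[i];
        out[j] = useReLU ? std::max(0.0, z)            // ReLU
                         : 1.0 / (1.0 + std::exp(-z)); // sigmoid
    }
    return out;
}

// Forward pass through the same 2-4-1 shape as the example above,
// using arbitrary hand-picked weights for illustration.
double forwardSketch(const std::vector<double>& x) {
    std::vector<std::vector<double>> w1 = {
        {1.0, 1.0}, {1.0, -1.0}, {-1.0, 1.0}, {-1.0, -1.0}};
    std::vector<double> b1 = {0.0, 0.0, 0.0, 0.0};
    std::vector<std::vector<double>> w2 = {{-2.0, 1.0, 1.0, 0.0}};
    std::vector<double> b2 = {0.0};
    std::vector<double> hidden = denseLayer(x, w1, b1, true);   // ReLU layer
    return denseLayer(hidden, w2, b2, false)[0];                // sigmoid layer
}
```

Because the output layer is a sigmoid, the result always lies in (0, 1), which is why a single output unit works for the binary XOR targets.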
For detailed information on Neurama's classes, functions, and customization options, refer to the Neurama Documentation.
Contributions are welcome! To contribute:
- Fork the repository.
- Create a new branch for your feature or fix.
- Commit your changes.
- Submit a pull request detailing your modifications.
Neurama is licensed under the Mozilla Public License. See the LICENSE file for more information.
Neurama draws inspiration from various open-source neural network libraries, including:
- MiniDNN: A header-only C++ library for deep neural networks.
- OpenNN: An open-source neural networks library for machine learning.
- FANN: Fast Artificial Neural Network Library.
- TensorFlow: An end-to-end open-source machine learning framework.
These projects have influenced Neurama's design and functionality.