MoE engine and non-linear safety/reward in Catholic AI architecture #5
Closed
ghost started this conversation in Show and tell
Replies: 0 comments
Hi. I would be interested in brainstorming about a MoE (Mixture of Experts) LLM (Large Language Model) engine and non-linear safety/reward modules in Catholic AI.
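For concreteness, here is a minimal sketch of the top-k gating at the heart of a MoE engine. Everything in it (the toy experts, the gating function, the numbers) is invented for illustration and is not from any real system:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_route(gate_logits, k=2):
    """Pick the k highest-scoring experts and renormalize their weights."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: gate_logits[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([gate_logits[i] for i in chosen])
    return list(zip(chosen, weights))

def moe_forward(x, experts, gate, k=2):
    """Weighted sum of the top-k experts' outputs for input x."""
    return sum(w * experts[i](x) for i, w in top_k_route(gate(x), k))

# Toy scalar "experts" and a toy gate, purely hypothetical.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]
gate = lambda x: [x, 1.0, -x]

y = moe_forward(3.0, experts, gate)  # routes to experts 0 and 1
```

Only the selected experts run for a given input, which is why a MoE engine can keep specialized, higher-variance experts alive instead of averaging everything into one monolithic response.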
As you know, a monolithic LLM with a linear safety/reward-module architecture penalizes every high-variance, non-linear output that does not stay within the forced ARMA-style oscillation.
In UX terms, this means that a monolithic LLM and/or a linear safety/reward architecture cuts off every analysis that is not "mainstream" in the eyes of its designers.
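The penalty argument can be made concrete with a toy comparison of a linear safety score against a saturating (non-linear) one. All names and numbers below are my own illustration, not any real RLHF stack:

```python
import math

def linear_safety_score(base_reward, deviation, weight=1.0):
    """Linear penalty: grows without bound with deviation from the
    'mainstream' baseline, so any high-variance answer is eventually
    scored below a bland one."""
    return base_reward - weight * deviation

def saturating_safety_score(base_reward, deviation, cap=0.5):
    """Non-linear penalty: tanh saturates, so deviation costs at most
    `cap`, and an unusual but high-quality answer can still win."""
    return base_reward - cap * math.tanh(deviation / cap)

bland = (0.4, 0.1)  # (intrinsic quality, deviation from baseline)
novel = (0.9, 5.0)  # better answer, but far from the baseline

lin_bland = linear_safety_score(*bland)      # 0.3
lin_novel = linear_safety_score(*novel)      # -4.1: crushed by deviation
sat_bland = saturating_safety_score(*bland)
sat_novel = saturating_safety_score(*novel)  # penalty tops out; novel wins
```

Under the linear scorer the novel answer loses despite its higher base quality; under the saturating scorer the penalty tops out and the novel answer survives, which is the behaviour a non-linear safety/reward module would aim for.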
For example, if you asked Magistrate AI:
"What is your opinion of cases in the history of the Catholic Church where a famous saint was bullied by his or her superiors?", Magistrate AI will eliminate and smooth the question, because its safety/reward RLHF module flags it as "a heretical attack against the system".
I ran this test, and Magistrate AI did exactly that.
On this question, I think Magistrate AI is a digital echo chamber of the Church, not a complex analysis tool.
What is your opinion?