# obtain the expanded schema without importing it to fauna #1
If we could programmatically expand the schemas ourselves, we would not need to upload the schema to fauna. As it looks like fauna is running a loosely compliant version of openCRUD, I believe that we could run a script that expands any initial schema to add the openCRUD components. That would give us pretty much the same result as uploading the schema to fauna (we could then customize it to be fully compliant). Sadly, building such a script would be a big project in itself. I managed to find only 2 packages that are doing that already, but they have a lot of added flavours on top: they seem to achieve what we need, but they are wrapped in lots of extra complexities that serve the specific needs of their frameworks, and simplifying them would be as hard as starting from scratch. Therefore, we are still sadly dependent on the goodwill of fauna's team to help us move forward. @erickpintor @lregnier @n400 is there any chance you could advocate inside fauna to open source the part of the code that handles the schema expansion? It's virtually impossible to build any extension like faugra while having to rely on inaccessible parts.
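For reference, here is a rough before/after of the expansion being discussed. The expanded output below is abbreviated and reconstructed from what Fauna's GraphQL import is documented to generate (per-type `find*ByID` queries, CRUD mutations, metadata fields, base scalars); it is an illustration, not a dump of Fauna's internals:

```ts
// What we author and upload: a minimal schema.
const inputSchema = /* GraphQL */ `
  type User {
    name: String!
  }
`

// Roughly what fauna serves back after import (abbreviated):
// base scalars, metadata fields, and per-type CRUD operations.
const expandedSchema = /* GraphQL */ `
  scalar Date
  scalar Time
  scalar Long

  type User {
    _id: ID!
    _ts: Long!
    name: String!
  }

  type Query {
    findUserByID(id: ID!): User
  }

  type Mutation {
    createUser(data: UserInput!): User!
    updateUser(id: ID!, data: UserInput!): User
    deleteUser(id: ID!): User
  }
`
```

A script like the one described above would have to produce the second document from the first, entirely offline.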
Hi @zvictor, Leo and Erick are pretty swamped right now with the upcoming release. I've added this to our roadmap and will discuss it with the Product and Engineering teams to see how feasible it is (it's being tracked internally as ROAD-245). Feel free to email or DM me on Slack for an update. Sorry that the project has been blocked :/
Hi @n400! It looks like you are not working with Fauna anymore, which is very sad to hear 🥹
Hi @zvictor! It's good to hear from you. I would try reaching out to @rts-rob, Head of DA. He's passionate about the OSS community and started Fauna Labs.
Thanks @n400! Hi @zvictor - looping in @Shadid12, who has done some related work in tooling, all available in Fauna Labs. You can also find us in our forums and Discord server!
To be very clear and make it easier for the Fauna team to help us here, I would like to present 2 proposed solutions.

### A) Publishing and Maintaining a Package

It would be great to have the Fauna code open sourced, for obvious reasons, but that is apparently out of our reach of influence. However, that does not mean that relevant pieces of code could not be open sourced as separate packages, right? Somewhere inside Fauna, the schema we upload gets expanded to include the basic CRUD methods, either while it's still just a GraphQL schema or later, when it's already being processed and converted into Collections and so on. Regardless of when/where that happens, we need to split parts of it out and publish them as independent packages. Libraries like faugra and its sort can only be built if we have an environment that is predictable and reliable. Otherwise, "magic features" like the creation of the basic CRUD methods become unreachable, breaking the composability principle. Having a package maintained by Fauna that we can call from our code in order to expand the schema ourselves would be great!

### B) Adding a New Import Mode

`POST /import?mode=dry-run`

A new mode that would take the schema, validate it, and return the expanded/final schema from Fauna. Needless to say, that's a terrible solution 🥲

The dry-run endpoint/mode must be publicly accessible (i.e. with no need to present a secret key). Otherwise, users wouldn't feel comfortable delegating the secret keys of (possibly) production databases to tools that are supposed to run things in dry mode. Plus, it would add extra work in CI/CD and whatever other environments would need to keep an extra key just for dry-run operations. A sketch of how a client might call such a mode follows below.
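For concreteness, a minimal sketch of calling proposal B. The mode name and the response shape are assumptions (only `/import` itself exists today, and it currently requires a secret):

```ts
import { readFile } from 'node:fs/promises'

const schema = await readFile('schema.gql', 'utf8')

// Hypothetical dry-run mode: no secret, no database touched.
const response = await fetch('https://graphql.fauna.com/import?mode=dry-run', {
  method: 'POST',
  headers: { 'Content-Type': 'text/plain' },
  body: schema,
})

// Instead of mutating a database, the endpoint would validate the schema
// and return the expanded version that a real import would have produced.
console.log(await response.text())
```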
Some reflections on the proposals:
Hi @rts-rob and @Shadid12! Did you have the chance to check on the status of ROAD-245, and maybe the other comments in this thread as well? I have been working on this framework for the last 2 years and I am very happy with what I have accomplished with it so far. I am also confident that many will love it and that the Fauna community will benefit directly if it ever becomes popular. I never dared to actually launch/promote this framework anywhere because its UX was not quite there yet, but I would like to finally get it to the next level soon: I plan on renaming and rebranding the whole project, working on #10, and then making a decent launch. All of it has been waiting since April 2020 on this one issue right here. So I really hope I can hear back from you on what we can do to move forward together. 🤘
I think the realistic thing would be to extract the faunadb server from their docker image, decompile it, and write a tool to replicate the functionality. The main issue I see with this approach is that we have to manage this instead of fauna; I don't think there will be too much support on this from fauna directly.

Edit: running faugra through the Fauna Dev docker image with the config set to bypass the cache (60 sec timeout), it works the same as in fauna, with proper validation. As for fixing 2 and 3: since the config by default to the db is
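For anyone wanting to reproduce the workaround Daniel describes, here is a minimal sketch of driving the Fauna Dev container from Node. The image name and ports follow Fauna's docs; the cache-bypass setting mentioned above is assumed to live in a mounted config file, which is not shown:

```ts
import { spawn } from 'node:child_process'

// Start the local Fauna Dev instance. Port 8443 serves the core (FQL)
// API; port 8084 serves the GraphQL API, i.e. the /import target.
const fauna = spawn(
  'docker',
  ['run', '--rm', '-p', '8443:8443', '-p', '8084:8084', 'fauna/faunadb'],
  { stdio: 'inherit' },
)

// With the container up, importing against http://localhost:8084/import
// yields the same expanded schema as graphql.fauna.com, without handing
// production credentials to the tool.
fauna.on('exit', (code) => console.log(`fauna dev exited with ${code}`))
```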
Thank you for the deep investigation and for sharing your thoughts with us, Daniel!
Speaking from experience, we can't expect any level of support/collaboration from them in any direction we go. We are on our own, and waiting for things to improve on their side has proved to be a mistake on numerous occasions.
Likely yes. Plus, we would need to keep track of changes in a complex system in another language (Java?) and then port them to our system whenever it changes. It sounds very manual and prone to errors, unfortunately 🥹
It's a great solution once docker is available. The problem is that our ultimate goal is to run
Here is something promising. I would be happy to build that, but as a public service instead of through docker. My proposal for an independent endpoint:
Cons:
Pros:
Questions:
I have just implemented in
How we will finance and keep this service up, I honestly have no idea. 🤷‍♀️
TLDR for the @fauna team: Compiled suggestions on how to improve Fauna and fix this issue can be found here.
Currently, in order to generate TS types (`faugra generate-types` and `faugra build-sdk`), faugra makes some compromises that won't be acceptable to the general audience.

### The problem
Given a schema file, faugra uploads its content to fauna in order to:

1. have any missing "base" schema appended to it (it adds e.g. `scalar Date` and `directive @resolve`) --> primitive values are being hardcoded into base.gql instead.
2. have the auto-generated queries and mutations added (e.g. `findAll<Type>ByID`)

Without uploading the schema to the cloud, the TS types would be incomplete, lacking the content that fauna adds to it.
### Current solution

Putting it all together, in order to generate the TS types, faugra needs to:

1. merge the local schema file(s) into a single document;
2. import it to fauna in override mode, resetting whatever schema was there before;
3. download the expanded schema back from fauna;
4. generate the TS types from the expanded schema.
As modularisation is a core principle of faugra, we need to repeat this process for each file individually (--> we had to give up on that because of this issue). But if we do not reset the database before pushing the new schema in, fauna will merge the content of the files: the last schema uploaded will, in practice, extend the content of all schema files pushed before. Therefore, importing the schema in override mode is a must.
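Condensed into code, the workflow above looks roughly like this (the `/import` endpoint and `mode=override` are Fauna's real GraphQL API; the introspection query is abbreviated, and `FAUNA_SECRET` is assumed to be set):

```ts
import { readFile } from 'node:fs/promises'

const secret = process.env.FAUNA_SECRET

// Steps 1-2: push the merged local schema, wiping whatever was imported
// before (override mode), so earlier files don't leak into the result.
await fetch('https://graphql.fauna.com/import?mode=override', {
  method: 'POST',
  headers: { Authorization: `Bearer ${secret}`, 'Content-Type': 'text/plain' },
  body: await readFile('schema.gql', 'utf8'),
})

// Step 3: read the expanded schema back via standard introspection,
// then hand it to the type generator.
const res = await fetch('https://graphql.fauna.com/graphql', {
  method: 'POST',
  headers: { Authorization: `Bearer ${secret}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ __schema { types { name } } }' }),
})
console.log(await res.json())
```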
### Considerations

Considering the performance and side effects of steps 2 and 3, I believe that I can't have the TS types generated "on save", as I initially planned. And, all things considered, I wonder if anyone would actually bother going through the hassle of setting up such a tool, one that requires credentials and messes with your data.
So, we need to find a way to kill steps 2 and 3: we need to programmatically add the missing content to the basic schema instead of publishing it to the cloud.
Where do we start? 🙃
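As a starting point, here is a minimal sketch of that programmatic route using the `graphql` package. The synthesized CRUD shape is an assumption modeled on fauna's observable output (only `find<Type>ByID` is shown), not fauna's actual code:

```ts
import { parse, print, type ObjectTypeDefinitionNode } from 'graphql'

// Base definitions fauna normally appends; hardcoded here, much like
// faugra's base.gql already does for primitives.
const BASE = 'scalar Date\nscalar Time\nscalar Long\n'

// Naive local expansion: append base scalars and synthesize a Query type
// with a find<Type>ByID field per user-defined object type.
// (Assumes the user schema does not define its own Query type.)
export function expand(userSchema: string): string {
  const doc = parse(userSchema)

  const typeNames = doc.definitions
    .filter((d): d is ObjectTypeDefinitionNode => d.kind === 'ObjectTypeDefinition')
    .map((d) => d.name.value)

  const queries = typeNames
    .map((name) => `  find${name}ByID(id: ID!): ${name}`)
    .join('\n')

  return [BASE, print(doc), `type Query {\n${queries}\n}`].join('\n')
}

// expand('type User { name: String! }') now contains findUserByID,
// with no database, secret, or network round trip involved.
```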