Launch Plan: Max AI #306
Comments
Thanks for starting this Joe - better early than late! I suppose if Max AI ends up being a paid product (not available to all our users), it'd be similar to Teams, with a fixed monthly price? Pay $X per month and have Max AI at your fingertips to help you with everything at PostHog. It could be enabled as an add-on or even as its own product. But I think that's the way to do it (I can't imagine Max AI pricing where you pay per usage...)
I've heard a mix of things on that front, so will defer to @Twixes on what pricing may look like (if anything).
Chatted with Michael and think a great feature here would be working Max into all our tutorials with a suggested prompt. I've added this to the list above. Thread: https://posthog.slack.com/archives/C01FHN8DNN6/p1743678702390879
Hey guys, sorry to throw a bit of a dampener on this, but I think we might need to pump the brakes a little here. Our lawyers have given us some advice: there are questions we need the LLM providers to answer explicitly so we can update our DPA and our wording accordingly. The questions they want us to put to @PostHog/team-max-ai and/or our LLM providers are below. Maybe @PostHog/team-max-ai you guys know the answers to some of these.
I will contact the LLM providers next week, once we have whatever answers we can get internally, and hopefully by the end of the week we can get these back to the lawyers to update the DPA and potentially the terms. Our chat with our lawyers was a bit more eye-opening than I expected, but the good news is we are learning this stuff before general release and not after!
I don't know all the answers, but here's what I can offer @fraserhopper: ad 1: the terms are all standard API (business) use terms made public by OpenAI / Anthropic / Google Gemini. Hope this is helpful – I'd like to prevent legal wrangling from blocking the rollout.
Yeah, we will be contacting the LLM providers this week and hopefully we'll get the answers back. I hope this doesn't delay anything, but it would be much better to delay a couple of days and fully understand how the data is being handled than to be mishandling customers' end-user data.
Data doesn't get mixed between customers – specifically, LLM generations always use data from only one specific project. Definitely agree with handling the data in kosher ways. :)
@Twixes I assume Cohere is the same here? |
Right, so if we were theoretically going to do this "fine-tuning", what would we do and how would we do it? Would this be using our customers' end-user data? If so, this would require a pretty extensive rewrite of our DPA, so it would be good to understand this now.
@Twixes You answered yes here but offered no description – can you please describe? Same thing for...
@Twixes Just flagging that we need answers to go back to our lawyers with. I can tell them we have some time pressure here, but it would be good to get answers back to them about the details of Q5-7. (Sorry to push – I just really don't want to hold up the release here if we can avoid it.)
Ofc, answering @fraserhopper
Yup.
Fine-tuning means feeding an external model the same data that we currently pass in as input prompts. The only difference is that the result is a fine-tuned version of that model instead of a generated output (a fine-tuned version of a model is then theoretically better than a non-tuned one at a specific task) – but the scope of data used isn't any different from regular usage.
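To make that concrete, here is a minimal sketch of the two paths, assuming an OpenAI-style API via the official Node SDK – illustrative only, not necessarily the providers or pipeline Max actually uses, and the variable and file names are hypothetical:

```ts
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical project-scoped context blob – i.e. the same data we already
// send today as part of a prompt.
const projectContext = "Insight definitions, event names, etc. for one project";

async function regularUsage() {
    // Regular usage: the data goes in as prompt input, the model returns text.
    const completion = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
            { role: "system", content: "You are Max, PostHog's AI assistant." },
            { role: "user", content: projectContext },
        ],
    });
    return completion.choices[0].message.content;
}

async function hypotheticalFineTune() {
    // Fine-tuning: the same kind of records, written out as JSONL training
    // examples, get uploaded and used to produce a tuned model instead of a
    // one-off completion.
    const trainingFile = await client.files.create({
        file: fs.createReadStream("training_examples.jsonl"),
        purpose: "fine-tune",
    });
    return client.fineTuning.jobs.create({
        training_file: trainingFile.id,
        model: "gpt-4o-mini-2024-07-18",
    });
}
```

Either way, the training file would contain the same project-scoped records we already send as prompts, which is the point about scope above.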
I mean, this AI system can't interact with the outside world and doesn't decide on any sensitive topics like credit applications – it has the same human oversight and compliance protocols as the rest of PostHog's systems, for example automated tests. I don't know what else could possibly be needed for this question.
I replied "We're rolling out automated evaluations of AI output to monitor for such issues." – is this insufficient? Can you specify?
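For reference, a rough sketch of what such an automated evaluation could look like – the names and the leakage check here are hypothetical, not the actual Max AI eval suite:

```ts
// Hypothetical eval harness – illustrative only.
interface EvalCase {
    prompt: string; // question fed to Max
    forbiddenStrings: string[]; // content that must never appear in the answer,
    // e.g. identifiers belonging to a different project
}

type Generate = (prompt: string) => Promise<string>;

async function runEvals(cases: EvalCase[], generate: Generate) {
    const failures: { prompt: string; output: string }[] = [];
    for (const c of cases) {
        const output = await generate(c.prompt);
        if (c.forbiddenStrings.some((s) => output.includes(s))) {
            failures.push({ prompt: c.prompt, output });
        }
    }
    // In CI, any entry here would fail the run.
    return failures;
}
```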
Thanks @Twixes
Max AI isn't super close to launch to my knowledge, and we don't yet have a pricing RFC. The target launch date is still just 'Late Q2'.
However, it's important to get this launch right, so I'm getting ahead of it with a rough launch plan now, which we can continue to add to.
Max AI is an interesting product because the value prop is going to be around accelerating existing workflows and tying into a zeitgeist. It isn't something where there will be a tangible business outcome that we can easily quantify, or where there'll be comparable tools we can replace.
Instead, it'll be a case of capturing the personal excitement and enthusiasm of users, and reflecting that into broad awareness. Social media is huge for popularizing in this way, and for AI in particular - so we should explore how to lean into that.
Best practices
Add the <MaxCTA /> component - @joethreepwood (see the usage sketch after this list)
For onboarding flows I'll make a special step to add this to emails for non-technical users especially, as they'll clearly be able to benefit.
Template-wise, I imagine we should include a template description which teams can use to give Max AI context.
Customer story-wise, I'm interested to explore if there are influencer opportunities here to tie into the social angle.
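A hypothetical sketch of the <MaxCTA /> docs embed mentioned above – the import path and prop names are assumptions, not the component's real API:

```tsx
// Hypothetical tutorial callout – import path and props are assumptions.
import { MaxCTA } from 'components/MaxCTA'

export function FunnelTutorialCallout() {
    // The suggested prompt is the "work Max into tutorials" idea from the thread.
    return <MaxCTA suggestedPrompt="Why did conversion in my signup funnel drop last week?" />
}
```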
Launch plan
Before launch - Deadline: 19 May
Launch day - Deadline: 22 May
A few tweaks to the usual here. I think we can invite older users who we'd normally exclude but who may have churned due to difficulty with the platform. I think we can also arrange a repeating social activity and tie in with some other social efforts we have going on.
Update the <MaxCTA /> component to no longer reference the beta - @joethreepwood
Follow-on activity
Some big ideas
Max AI AMA
We could let Max AI take over our Twitter for an AMA. We'd use him on a demo Hogflix account, let people ask questions via Twitter, run them through Max on the demo account, then surface the results back to them as replies. All we'd need for this is some demo data and engagement.
Unique Max prompt
A second idea would be to give Max a unique capability where users can prompt him and he'll create an image for the user, similar to the changelog frames. They can then share these with us on social, or use them to share news at their org.