Replies: 18 comments 32 replies
-
Hello! Thank you so much for taking the time to gather all of your thoughts so that we can answer them in one place. Since there is a lot to our response, we'll be taking 2-3 days to respond so that we cover everything and provide the appropriate level of detail. Thank you for your patience. For others following the thread: please feel free to add your questions and such here while we're drafting our response to the above. Depending on the context we might add it into the same answer or get to it after we've completed the current answer.
-
On blocklists in general
I'd like to echo that. Reading https://nivenly.org/docs/papers/fsep/ I found it impossible to tell if this is a collection of ideas that some well-meaning people have brainstormed together and hope will work, or the result of a serious consideration of the tradeoffs and experiences of previous efforts, which were then used to synthesise the list of requirements. E.g., the document mentions "Block Together" in passing, but there's no indication that Jacob (the BT creator) has been approached or interviewed to see what lessons he has after running BT for ~6 years. Even a cite of https://www.rfc-editor.org/rfc/rfc6471 would be something (I keep seeing variants of sections 2 and 3 in particular come up in these sorts of discussions). Note: the experience of people in organisations like IFTAS, listed as contributors, is invaluable, but names aren't a substitute for links to relevant research carried out or cited by these people.

"moderation"

I was struck, when reading FSEP, that it makes a weird pivot early on. The first section talks about "moderation". It doesn't define the term, but I take that to mean the actions of server admins responding to reports about the users on their server. The "Context and Summary" section ends (emphasis mine):
But then, about halfway through the "Challenge" section the language shifts; it's no longer about moderating users, it's about blocking servers. I appreciate there is a connection between the two -- blocking is the stick you threaten moderators of other servers with if you think they aren't doing a good enough job -- but blocking and moderation are not the same thing. At least, I don't think they are, and the doc doesn't make a strong case that they are. Blocking absolutely can be one of the tools used to help moderators. But there are other tools, not all of them technological, that could help too, especially documentation and training that answers questions like:
etc. If the document was more explicit that moderation is a complicated topic, there's no single technological silver bullet, and this is just one tool in a much larger toolbox, I wouldn't have these concerns. But that's not the vibe I got reading the document. And I don't see a bright line connecting the technology proposed in the doc and "a solution ... that will normalize moderation standards" that the summary calls for. I mentioned IFTAS earlier. Looking at https://about.iftas.org/ and the list of workgroups at the bottom: if they had a "Federated blocklists" (or similar) effort I think the FSEP paper would fit right in; it would be clear that this is part of a broader, multi-pronged effort. I'm not saying that FSEP/TBS has to be done under the IFTAS flag. But if FSEP opened by making it much clearer that that broader effort exists, and FSEP is not the solution in and of itself, that vibe I mentioned earlier would diminish. I hope that's helpful feedback.
-
Thank you to the proposal team for putting this forward and dclements for starting this discussion.

Suggestion: ability to auto-update without breaking existing follow relationships

FSEP 3.1 talks about showing admins the impact of a denylist on their users, but it is unclear how this would apply in an out-of-band auto-update. As dclements mentions in the section on Current Processes, the current end-user experience of defederation of an existing follow is not easily reversible. I suggest that, when giving admins the ability to decide whether to auto-update, there should be an option for "auto-update as long as it does not break any follow relationships". This could be done in a couple of ways
Question: Future Integration of Multiple Denylists

FSEP 4.2 states "Users should be able to add up to 100 default deny lists" (emphasis added). Is the intention here that more denylists will be added to the platform UI and admins will be able to choose between these select few, and/or will admins be able to subscribe to arbitrary denylists, say by inputting a URL of a denylist project that implements the finalized API? IMO the ability to subscribe to denylists put forward by arbitrary sources is an important eventual goal.

Question: Following UI Integration

(On Mastodon at least) a server-level block will already prevent someone from the blocked server from following a user, and a limit will cause the limited server's user to show up as a follow request instead of a follow if someone has open requests. What are the additional benefits of calling to the external denylist again here? Is the intention that a user could choose to configure their own external denylists that they could reference regardless of whether their admin subscribes to that denylist?
-
One question I've been formulating around this involves the how in a little more depth. The Bad Space is proposed as a first source, but it isn't clear to me why this interface couldn't work on top of basically any static site, especially for the purposes of a Minimum Viable Product. There's no feature in the list here that wouldn't seem to work with loading directly from, say, Oliphant's GitHub. It could even configure something like FediBlockHole in the background and build its interface on top of that. There may be other ideas or features coming for The Bad Space, but it seems like a general approach would fit neatly with the way that the fediverse works. So why start by loading from this source?
-
I dumped this at the top of the Google doc, and got asked to copy and paste it over here. Please pardon me not reading through the other replies to see if anyone's said similar things; it's a busy weekend. Thank you for all the effort you put into this FSEP. I would like to respectfully suggest that this proposal is missing some major elements of the procedure that I apply in my regular activities as the administrator of a small Mastodon instance, which has been running since 2017. This existing proposal does a decent job of outlining a sensible workflow for automatically applying blocklists, but has absolutely nothing related to how much I trust various blocklists, which I feel is a critical part of any automated or semi-automated solution. Let me explain.

I trust certain people to make good decisions about moderation. There are some admins who have a long history of always making decisions that are well-documented; if I see them make a post to #fediblock then I will probably just say "hey, thanks for your work, cool admin", and go block whatever sites they suggested.

I trust some people to probably be right, but will verify before acting. They're out there, posting regularly, and I usually agree with them, but I always look into their suggestions for myself. Sometimes they make suggestions that I just don't think are worth bothering with; sometimes I may think they're missing the mark entirely.

I trust other people to make bad decisions about moderation. There are some admins who have a history of getting into arguments and calling for a #fediblock on the entire site of the person they're arguing with. There are admins who urge blocking any site that does not block other sites with sufficient speed.

I trust some people to make actively terrible decisions about moderation.
There are some admins whose posts to #fediblock are a clear and obvious signal that they are the sorts of people I would block - which I usually do - and that anyone they suggest blocking is probably someone I want to avoid blocking.

This proposal has no concept of the relative trustworthiness of its blocklist sources, and in fact recommends the default acceptance of a blocklist source that combines several sources I trust to make bad decisions. What is needed is a way for an admin to rank their trust in each and every blocklist source, and to define thresholds for being alerted about potential new block-worthy sites, and for sites being automatically blocked.

A simple way to do this would be for the user to assign a numerical ranking to each blocklist source. List A, run by my best buddy of thirty years, who I trust implicitly? They get, oh, 10. Lists B1, B2, and B3, who share my general values? They all get a 3. Lists D1, D2, and D3, run by people who I think are sometimes right but prone to making impulsive decisions? They get a 1. Lists F1, F2, and F3, who are constantly making minor dumb suggestions? -1. List Z, run by people who want me and everyone like me to not exist? -10.

And then I would choose where my thresholds are. With the above example sources, I might choose a collective trust of 9 for "block this automatically", and a collective trust of 3 for "suggest this block". If A suggests any block, then I just apply it. Same with B1/2/3 all agreeing, or B1/3 agreeing and all of D1/2/3 concurring. And if some B/D-tier sources suggest it, then I would get this new possibly-blockworthy place shown to me.

Some small further extensions, perhaps: I might want to always have any block suggested by B2 or D1/2/3 held for manual review, even if my total trust in this suggestion is well over my threshold for auto-block.
And the case of A and Z suggesting the same site, for a total score of 0, is an interesting one - perhaps positive and negative scores should be summed up separately, and anything with a positive score that crosses the "suggest block/autoblock" thresholds should be presented as a suggestion, with a note that this one might be an interesting one that I'll want to deal with when I have time to sit down and ponder nuance.

Ideally this would also be accompanied by an option for my Fediverse server to publish its blocklist, so that admins who think I am a wise and wonderful dispenser of even-handed, thoughtful justice can apply my every suggestion, and so that admins who think I am a total misguided fool can use me as a signal for what not to block. And I can do the same with regards to other admins.

Anyway. There you go. The Fediverse is decentralized, and any suggestion for sharing blocklists should also be decentralized, and should recognize that the people who compile these lists are humans, whom I may have varying degrees of trust in. As written, this proposal feels inherently centralized; it's trying to replicate workarounds for the incredibly centralized corporate-owned social networks of the past decades, without any thought given to the inherently anarchic nature of federation. It's also suggesting the enshrining of a centralized blocklist put together from multiple people with a track record of admin decisions that is spotty, at best. Experience has taught me that if something is not in an initial spec like this, it has a huge chance of never getting implemented, because people move on to other things. So implementing this as it's currently written would just enshrine badspace as the default blocklist for every fedi server that adopts this, and I think that's going to make things actively worse.
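The trust-scoring scheme described above could be sketched roughly like this. Everything here is illustrative: the source names (A, B1, Z, ...), trust values, thresholds, and the hold-for-review set all come from the hypothetical example in this comment, not from FSEP.

```python
# Hypothetical sketch of trust-weighted blocklist consensus.
# Positive and negative trust are summed separately, so a +10 and a -10
# source agreeing is surfaced as "interesting" rather than netting to zero.

AUTO_BLOCK_THRESHOLD = 9   # collective positive trust needed to block automatically
SUGGEST_THRESHOLD = 3      # collective positive trust needed to surface a suggestion
HOLD_FOR_REVIEW = {"B2", "D1", "D2", "D3"}  # sources whose suggestions always get a human look

# Admin-assigned trust per blocklist source (positive = trusted, negative = distrusted)
trust = {"A": 10, "B1": 3, "B2": 3, "B3": 3,
         "D1": 1, "D2": 1, "D3": 1, "F1": -1, "Z": -10}

def evaluate(domain, suggesting_sources):
    """Decide what to do with a block suggestion, given which sources made it."""
    pos = sum(trust[s] for s in suggesting_sources if trust.get(s, 0) > 0)
    neg = sum(-trust[s] for s in suggesting_sources if trust.get(s, 0) < 0)
    if pos < SUGGEST_THRESHOLD:
        return "ignore"
    if neg > 0:
        # Trusted and distrusted sources both flag it: present it with a note.
        return "review-with-note"
    if any(s in HOLD_FOR_REVIEW for s in suggesting_sources):
        return "manual-review"  # hold overrides auto-block, per the extension above
    return "auto-block" if pos >= AUTO_BLOCK_THRESHOLD else "suggest"
```

For example, a suggestion from A alone clears the auto-block threshold, while A and Z flagging the same site is queued with a note for a closer look.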
Also, posting a version of this to a couple relevant Masto tags got a reply from @[email protected] pointing me to their "Fediseer" project, which appears to be aiming at implementing this sort of mechanism. Perhaps you should chat with them and compare notes, if you haven't already met them - maybe even combine your efforts. I think it's definitely getting to be time to move on from manually sharing block suggestions in a hashtag.
-
It should be impossible, enforced in the code, for an automated block action to sever a relationship where one of your own users is following someone on the instance to be blocked. This will safeguard against the threat and allow implementing the existing best practice of giving users a heads-up period to move before severing their relationships. When automatic blocking is blocked by such a rule, there are several possible manual process outcomes, at least:
In the vast majority of cases, however, nobody on your instance is following anyone on new junk instances, and the automated process works without severing your users' follows. Now, there's also the opposite-direction follow relationship, which you can't automatically trust but would probably also want to avoid severing. I think you can probably have the tooling look for follows of your users from users on the candidate for blocking, and, if any of them have had interaction back from your users (rather than just being lurking followers), flag the block as needing manual review.
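The guard described above could look something like this minimal sketch. The function names and callback signatures are hypothetical; a real implementation would query the instance's database for follow relationships and interaction history.

```python
# Hypothetical guard: refuse to auto-apply a block that would sever
# relationships involving local users, and flag it for manual review instead.

def can_auto_block(candidate_domain, local_follows_of, remote_followers_of,
                   has_interacted_back):
    """
    local_follows_of(domain)    -> local users following accounts on `domain`
    remote_followers_of(domain) -> remote accounts on `domain` following local users
    has_interacted_back(acct)   -> True if a local user has replied to / boosted them
    Returns (ok_to_auto_block, reason).
    """
    # Rule 1: never sever a local user's outgoing follow automatically.
    if local_follows_of(candidate_domain):
        return False, "local users follow accounts there: manual review"
    # Rule 2: incoming followers may be severed automatically *unless* local
    # users have interacted back (i.e. these aren't just lurking followers).
    for follower in remote_followers_of(candidate_domain):
        if has_interacted_back(follower):
            return False, "reciprocal interaction detected: manual review"
    return True, "no local relationships affected: safe to auto-block"
```

In the common case (a new junk instance nobody follows) both checks pass and the block can be applied without severing anything.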
-
I appreciate the call-out in the OP, as well as the thread running through several other comments, pointing to the centralization inherent in privileging any given blocklist (or meta-blocklist) above other implementers of the protocol. Privileging a given blocklist or meta-blocklist creates a position of power for the individuals or institutions administering it. This creates an attractor for bad actors. All through the history of the internet, there have been cases in which mod power was misused for personal vendettas. All through the history of the internet, this has made people with personal vendettas want to become mods, so they could misuse power. This is human and inevitable. And the greater the power vested in a given mod position, the bigger the attraction for this type of person. Even if everyone initially involved in a privileged blocklist project can be trusted to operate in good faith, with perfect judgement, in a manner respected by all, etc -- time passes. The original mods will move on. And... what then? No matter how good the governance structure of any given blocklist project, all blocklist projects are vulnerable to social engineering on this front. Decentralizing limits the power of any given project. In doing so, it mitigates this vulnerability directly, by containing damage from bad actors. It also limits it indirectly, by making blocklist projects less attractive to bad actors in the first place.
-
Sequencing and MVP

I would like to explicitly suggest that this project set its MVP to be that platforms (e.g. Mastodon) should provide a denylist API and allow admins to configure one external denylist, while working to make The Bad Space a successful implementer of that API. Many people in this thread have brought up good questions and concerns such as
These questions require a lot of discussion, but they don't apply to an interface that allows an admin to link a denylist of their choice, vetted to their level of comfort. Such an interface is a good step on its own, while allowing more time and care to discuss a potential follow-up of recommending specific denylists in a platform UI. Allowing more than one denylist to be configured by admins can be another follow-up, unrelated to whether any specific denylists are recommended.
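The MVP suggested above is small enough to sketch: an admin configures a single external denylist URL, and the server fetches and parses it on a schedule. Everything here is an assumption for illustration — the endpoint URL, the CSV column names (`domain`, `severity`), and the default severity are not defined by FSEP or any platform.

```python
import csv
import io
import urllib.request

def parse_denylist(text):
    """Parse a CSV denylist with assumed `domain` and `severity` columns."""
    return [(row["domain"], row.get("severity") or "suspend")
            for row in csv.DictReader(io.StringIO(text))]

def fetch_denylist(url):
    """Fetch and parse the one admin-configured external denylist (per the MVP)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_denylist(resp.read().decode("utf-8"))

# Hypothetical admin setting: a single subscribed denylist URL.
# entries = fetch_denylist("https://denylist.example/list.csv")
```

An admin UI would then preview `entries` (including follow-relationship impact) before applying them.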
-
One thing I do not see mentioned in the proposal is that there are different degrees of limiting interaction between servers. The outright "block" is the most extreme, but it's also possible to "silence" or otherwise limit visibility of noisy instances. Good examples of instances people would consider silencing but not blocking would be e.g. botsin.space (the Fun but Spammy Bots instance) or switter.at (requiescat in pace). The proposal describes only a method for wholesale domain blocking, which forces these edge-case instances into one bucket or the other depending on admin preference. "Silence" would open up a potential in-between category to place them in. Also, a block is either "never" or "forever" - it would be great to see timeboxed options for a temporary mute that automatically reconnects after some time has passed. A good example would be when mastodon.social was overrun by crypto spammers before they added hCaptcha support - it would have been nice to silence them for a week while they got their mod issues sorted out, and then automatically re-enable normal federation.
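The graduated and time-boxed moderation suggested above implies a richer blocklist entry than a bare domain name. A sketch, with hypothetical field names (the severity values loosely mirror Mastodon's suspend/silence/limit vocabulary):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical blocklist entry supporting "silence" as an in-between
# severity and an optional expiry for temporary limits.
@dataclass
class DomainModeration:
    domain: str
    severity: str = "suspend"              # e.g. "suspend" | "silence" | "limit"
    expires_at: Optional[datetime] = None  # None means indefinite

    def is_active(self, now=None):
        """An entry applies until its expiry passes; indefinite entries always apply."""
        now = now or datetime.now(timezone.utc)
        return self.expires_at is None or now < self.expires_at

# e.g. silence a spam-flooded instance for one week, then auto-refederate
entry = DomainModeration(
    "spam-flooded.example", severity="silence",
    expires_at=datetime.now(timezone.utc) + timedelta(days=7),
)
```

A periodic job could then drop entries whose `is_active()` has become false, re-enabling normal federation automatically.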
-
One more bookkeeping question: Is this the correct venue for this conversation? What does the process from here look like? There are several elements here that would seem to be good fits for a FEP, and others that require UI work from Mastodon specifically but could be extended beyond Mastodon trivially. So is the plan here to build this up with Nivenly and then bring in Mastodon? Is it to then generate a FEP, or several? What is the roadmap here?
-
Thank you again for posting all of your questions and analysis! We're going to answer each section as we go through, keeping the same section headers as often as possible. Note that the section headers may not be in the same order. Questions that may have been answered by the Important clarifications section will not be re-answered, but please do let us know if the answers need to be clarified. We also realize that the discussion took off while we were drafting this response: awesome! And thank you. Due to length, we've started with the Q&A in this document and are going to 1) answer the remaining questions in our next post and 2) go through the thread and clarify any other questions.

Important clarifications at the top

There are some important clarifications that are needed to help guide the answers we'll be providing below - not just to @dclements, but also to some of the other patterns we noticed while drafting this response.

What is FSEP?
It's important to understand that the very broad goal of FSEP is to improve the moderation experience of federating tools. Since this scope is very, very broad, the problem can only be solved in stages by choosing one issue at a time and giving each issue the attention to detail it needs. This is also why the FSEP document is structured the way it is. The broad context is that FSEP seeks to be a toolset to improve the moderation experience of federating tools. The first problem chosen to work on in this space was blocklist management. In v1.0 of the doc, blocklist management includes the ability to use one or more blocklists, seeing why a site is being moderated (limit, suspend, etc.), allowing instances to override a site's moderation status (either applying it or removing it), and so forth.

Reminder about current stage for FSEP

The FSEP document is currently a requirements-gathering document, similar in format to what is called a Product Requirements Document (or PRD). That means that questions around features, scope, and intended functionality are all great at this stage, but questions around specific implementation detail will need to wait until the next stage, once the initial minimally viable PRD is complete (which it is not, yet). While going through the questions that were posed we will do our best to answer, but if a question is too deep in implementation detail it might not be answerable yet, as that will come in the next phase once the requirements are complete.

Separating concepts: FSEP is / will be a management tool and The Bad Space is a specific blocklist

There are several questions about FSEP and The Bad Space. We understand there is some confusion here, because Ro is the author of version 1.0 of the FSEP PRD and also runs The Bad Space. (Clarification: while Ro did collaborate with others while drafting FSEP v1.0, he is the primary author.) That said: FSEP and The Bad Space are distinct entities.
Since Ro is the author of FSEP v1.0 and the maintainer of The Bad Space, it is easy to see how the projects he has experience with would influence the writing of the doc. There are examples such as this one for feature 3.1:
This does not mean that this feature is limited to The Bad Space as a specific blocklist. The Bad Space is only used here to provide a specific example of what a feature like this might look like.

Questions around “what about ${named blocklist}? Will FSEP support it? (and/or) Why doesn’t it?”

The goal of building a blocklist management tool is to support as many blocklists as possible. To do that, we need the involvement of moderators and maintainers of existing blocklists. At a minimum, we need the moderators and maintainers of those managed blocklists to look at the FSEP PRD and tell us if the feature set being proposed works for what they're creating, and if not, what features need to be added or altered to accommodate them. We were starting to do outreach to this end when our founder unexpectedly passed away a matter of days after the FSEP PRD announcement. Basically: if you are a moderator or maintainer of a community-managed blocklist, our goal is to work with all of your blocklists. We need you to help us ensure that's the case, so that the necessary feature requirements make it into the FSEP PRD to be implemented. (We acknowledge that some of this is happening in this thread - this is exactly what is needed to ensure that FSEP's blocklist management meets as many needs as possible.)

I have questions about The Bad Space, where do I ask them?

Ro is the maintainer of The Bad Space. While there might be some answers that we can provide based on our awareness of the project, Ro is the one best suited to answer questions about The Bad Space. Typically Ro can be reached at @[email protected]; however, Ro's sites and work have been subject to several DDoS attacks for the past week, and the attacks are ongoing. Ro will be reachable again once his sites / contact are back online.
Ro has asked people to hold some of their questions while he focuses on getting everything back online (especially as some of the questions will answer themselves once the underlying information is once again available). Similarly: The Bad Space's source code is located on Ro's personal git repository (note: not GitHub), which is currently offline due to the ongoing attacks. Once it is back online, you can view the source code, licensing, etc. for The Bad Space there. Succinct summary of The Bad Space's blocklist data: the blocklist data being built by The Bad Space is consensus-based and requires a consensus of two or more sources for a site to be recommended for blocking. There was a recent issue where the consensus check was broken and thus any site that was on any source blocklist for any reason was put into The Bad Space. This issue was resolved both in the code and in the sources that provided that data as part of the consensus. Any additional information about this, including the sources / data stored in Ro's git repository, will need to wait until Ro, and his work, are back up and online.

I want to participate, but I’m not familiar with GitHub. Where can I submit my thoughts and feedback?

We've started trying to drive conversation traffic to our Discourse to assist with this and have started a thread for FSEP here. Please do not feel limited to that thread - feel free to create your own if it is helpful! This is only to get started. If you have feedback or feature requests for which a comment on the Google Doc isn't a good fit, and GitHub isn't in your comfort zone, please use the Discourse.

I’m the maintainer of a blocklist or community-managed blocklist, and I would like FSEP’s blocklist management to support my blocklist. What do I do?

Please review the FSEP PRD here.
If you see features that need to be added, removed, or otherwise adjusted, please either contribute to the conversation here or on Nivenly's Discourse if that is more comfortable than GitHub. As a reminder: the current FSEP paper is outlining features that will become actual code. For example, one type of feedback that is needed in the current phase is "FSEP's blocklist management needs to support displaying information about how a specific blocklist is built and maintained". The next phase will be implementation, which will work on determining how to make that a reality.

What the heck is “Tier 0”????

When those who moderate or maintain instances on the Fediverse refer to Tier 0, or T0, they are usually referring to what is generically called "the worst of the worst". Since this term is used somewhat broadly, and in most cases colloquially, the extreme cases are mostly understood, while the "grey-er" areas - and what even starts to approach a "grey area" - are less clear; this is where, when, and why Fediverse mods/maintainers get into very heated debates about what "clearly is" and "not so clearly is" the "worst of the worst". All that said, you can in general expect extremist views, fascism, gore against persons, hate crimes, CSAM, and so forth to be in the "understood as T0" category, and that is what a Fediverse moderator or maintainer, or a paper like FSEP, is referring to when using the term. Noting that T0 is used but not defined in the document, we will make sure to open a pull request defining this term (and any others that come out of this discussion thread) in the document. Noting also that, as FSEP is currently focused on building blocklist management capabilities rather than creating a centralized source of one opinion, it is the individual blocklists that will define what T0 means for each blocklist, not the FSEP project.
Answering the Questions

Availability - what happens if a subscribed blocklist is offline?

Short answer: nothing, as the blocklist data is stored in your instance's blocklist. If the blocklist is dynamic, it simply won't update until it's back online. Long(er) answer, via example: let's say you are using one of Oliphant's blocklists. Currently, you would need to manually upload the blocklist CSV to your instance, either on a maintenance schedule or when you notice that there's an update to the blocklist. If the repo hosting Oliphant's blocklist goes down, it doesn't impact your own blocklist in any way - the data is already there. You simply cannot download the new blocklist until the blocklist site is back online. For a dynamic / subscribed blocklist the situation would be similar; it would just be the tool (FSEP) instead of you that cannot download / upload until the remote blocklist is back online. Your local data would stay the same. Implementation detail: once FSEP is in the implementation detail phase, there can be Q&A here about what type of data should show in the interface once a dynamic / subscribed blocklist has been unable to sync / update for a period of time, as well as decisions about how long that "period of time" should be before errors or similar messages start to display in the UI.

Receipts

Receipts are something that are discussed often - not only in the context of FSEP and blocklists, but also in Fediverse moderation in general, including a team's internal documentation and processes. The complexity here comes from a few places:
That said, different blocklists might have different relationships to the above situations. The answers above are general, since there is not a single, centralized blocklist even if an instance uses FSEP. This isn't to say that there are never situations where evidence should be shown about why a blocklist is recommending a certain level of moderation for an instance. It does mean that thought and nuance need to go into figuring out how to handle the extreme cases, like the ones above and any others that someone might want to contribute. Leaning into the "eventually": since FSEP is still in the information / requirements gathering phase, we believe that it would be an excellent use of discussion time to discuss this. This can happen either here or in another thread, depending on which is least confusing for others to participate in.

Compatibility - what is compatible

Copying in from the original for visibility:
There is a lot here that falls into the implementation detail phase, for example #4, which asks about the ActivityPub specification. That said, we will try to answer. There are two main differences we can see between FSEP and the example provided with FediBlockHole:
For the first one, FediBlockHole does solve a part of the problem: it seeks to be a tool that can unify blocklists. That said, at least as far as we can see, the tool doesn't provide any additional data about the resulting merge. It's not possible to see which sites came from which of the merged blocklists, or to see the user impact of a site on the blocklist (how many networked connections will be disrupted). Amongst other things, these are features that are planned for FSEP's blocklist management. The second one deals with FSEP's longer-term goal of improving the moderation experience for federating tools. Blocklist management is the first problem that FSEP is seeking to address with Fediverse moderation tooling, but it's not the last. How the project will decide which issues come next, and how it will solve them, will be up to the project as it matures.

Current Processes

To copy in the original request: "In the current world when a server blocks another the follow relationships are basically irrevocably broken. If a malicious or simply an errant push of a blocklist occurs what strategies exist to remedy this? Should fixing those be considered part of this effort?" For the first one: this is an excellent feature suggestion! Currently, FSEP's PRD only has the feature that has the UI display how significantly the follow/follower relationships would be impacted. For the second one: this is also a great suggestion. As it stands now, most instances rely on their admins' communications (or lack thereof). Since most blocks wouldn't be known until something happens via an update, what do you think would be a good way to communicate this on a timeline? In your opinion, would a feature like this need to be part of the MVP release, or could it be queued for future feature updates as the implementation details are discussed and figured out?
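The provenance gap described above - a merged list that doesn't record which source lists contributed each domain - could be addressed with a merge that keeps per-domain attribution. A sketch with hypothetical source names (this is not how FediBlockHole works today, per the reply above):

```python
from collections import defaultdict

def merge_with_provenance(named_lists):
    """Merge several blocklists, recording which sources listed each domain.

    named_lists: mapping of source name -> iterable of domains.
    Returns: mapping of domain -> sorted list of contributing source names.
    """
    provenance = defaultdict(set)
    for source, domains in named_lists.items():
        for domain in domains:
            provenance[domain].add(source)
    return {d: sorted(srcs) for d, srcs in provenance.items()}

# e.g. an admin UI could then show "bad.example — listed by: listA, listB"
merged = merge_with_provenance({
    "listA": ["bad.example", "spam.example"],
    "listB": ["bad.example"],
})
```

Keeping the attribution alongside each entry is what makes the "why is this site blocked?" and "which list put it here?" UI features possible.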
The Breakdown by Section, section

Challenges

Covering this one in order as well: firstly, you outlined some very important challenges - thank you for doing so. You are also correct in what you stated about the similarities to Block Together: both FSEP's blocklist management and Block Together have similar goals, which is the point of mentioning it, but the implementation detail will be different and handling the scope correctly will be different. If we're understanding correctly, there are a few meta-concerns here. The first is the scope of erroneous and/or opinionated blocking, and the second is primarily the "order of operations" in which blocklist data is applied. To clarify:
To start to answer these concerns: these are definitely details that need to be ironed out. They can be discussed in the abstract here, as well as in the implementation detail phase once we get there. As FSEP is looking to start with an MVP and build outwards, there is a lot of focus on how to handle moderating T0 sources. Growing beyond T0-only moderation, though, what about syncing blocklists to moderate (in or out) Twitter bridges? Or bot-based instances, as used in the above example? 👉 We have some thoughts, of course, but we are interested in yours (the collective participants and potential participants of this discussion thread). In a model where FSEP is showing multiple potential blocklists (moderated lists?) that servers can use, there could be tags or other categorical separations that differentiate between blocklists that are trying to solve the T0 problem specifically vs. those that are trying to solve other problems like "Twitter bridges" and the like. For the order of operations problem, FSEP wouldn't currently override how the underlying software handles a user's experience. To put it another way, it'd still be the union of the server's and the individual user's blocking preferences. Though this does bring up an interesting potential future feature of FSEP's blocklist management: should / can FSEP extend to the user level? Specifically, if an instance doesn't want to moderate Twitter bridges (for any reason) and a user does, could there be a user-level ability to import a moderated list, the same as an instance can do at the instance level? There's also the secondary problem that you introduced / reintroduced with regard to user-level blocking. Currently, Mastodon at least applies the union of moderated servers to an individual's experience (as mentioned). Can, and should, FSEP be able to override this so that an individual can follow a moderated server?
There would likely need to be nuance here, as there is a different scope of impact in following Badgers Every Hour vs a T0 instance. There’d likely need to be an understood compromise, where instances defined as T0 would be moderated, period, but other instances might be override-able at the user level. What do you (again, collective you) think?

We’re going to continue to answer the other questions, but wanted to post the above as the response is getting lengthy and we wanted to give everyone something to read / respond to as we go through and 1) answer the other questions introduced that we’re answering with this post and 2) catch up more on other posts in the thread to include in our next post. Due to length, again, it might be a couple days before we finish typing everything out. Thank you for your patience! |
Beta Was this translation helpful? Give feedback.
-
I am going to go ahead and put this out there, as I don't feel the response effectively addressed this. I believe The Bad Space has irreparably damaged trust among Fediverse users, and that it needs to be stripped from FSEP if this proposal has any path toward adoption.

I am sorry if this sounds harsh. I am sure this seems, from some perspectives, to be fueled by anti-black sentiment on Fedi, or a massive overreaction to a bug that caused instances to accidentally appear. I know Ro has promised to pull back the curtain and reveal a bunch about his project that is supposed to explain everything and smooth over the issues of the past week(s). But the fact of the matter is - and this should be evident to anyone who has followed the discussion thus far - that The Bad Space, its creator, and its "trusted" sources have found themselves in a position of being exceedingly untrusted by a large number of users and administrators. And this is not something you can logic your way out of.

Again: does this suck? Yeah, it probably does! It probably looks incredibly unfair and pre-emptively biased and a lot of other things! But, ultimately, you cannot force people to trust someone, and by moving forward with a proposal that enshrines a believed-untrustworthy person / site / list - instead of, say, proposing a NEW governance-based list / org, or leaving a hole here to be filled later (split the proposal?), etc - a lot of otherwise-reasonable admins and users who might normally welcome a proposal to import and manage shared blocklists are going to oppose it on this point alone. I don't think Nivenly has effectively grappled with this point yet, and it needs to be addressed.

If an admin later wishes to use the FSEP tools to import a blocklist from The Bad Space, that is the admin's own right. But making it the MVP, first goal, and integral part of the FSEP proposal is poisonous to a lot of people. 
EDIT TO ADD: I read from the replies in a lot of places that "The Bad Space" is a placeholder in here, as an example. But I need to stress that when you actually read the proposal (in its current form), that is not the case: for example, these quotes:
(no mention of other providers, all the mockups show The Bad Space, etc)
(this one hints at functionality, like clickable links to listings, that would need to be present for other blocklists - do you need to specify a Blocklist Format for FSEP compatibility?)
Again: does an FSEP-compatible blocklist need to support API search? Is there a REST protocol (for example) being defined? These things need definition in FSEP if they intend to stand in for TBS.

I may be overly pedantic on this point, but until the FSEP is updated with language that makes it clear that the goal is multiple lists, that TBS is not the MVP, that the mockups show other choices, etc... well, then I feel my concern is justified :)

The simplest solution IMO is to scale back the proposal to the basics: a defined blocklist format, a subscription method, integration screens into Mastodon, and a description of the effects of clicking the "apply" button. "Show me the receipts" or "Search this domain on this site" or whatever are all, I think, too provider-specific and well outside the scope of what is necessary for the basics here. |
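As a concrete starting point for the "defined blocklist format" suggested here, a sketch of what a minimal flat-file format could look like. The columns are illustrative (loosely modeled on Mastodon's domain-block CSV export), not anything FSEP has specified:

```python
import csv
import io

# Hypothetical minimal blocklist file: one row per domain, a severity,
# and a public comment. Any provider able to serve this over plain HTTP
# (a static page, a git repo, an object store) could participate.
SAMPLE = """#domain,#severity,#public_comment
t0.example,suspend,coordinated harassment
mild.example,silence,frequent unflagged sensitive media
"""

def parse_blocklist(text: str) -> list[dict]:
    """Parse the flat-file format into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

entries = parse_blocklist(SAMPLE)
```

Fixing a format this small first would let "provider-specific" features like receipts and API search be layered on later without blocking the basics.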
-
Thank you for the reply. Most of this I will need to respond to at times other than 2 AM, but this stood out to me immediately:
With respect, this seems not to be the case at all from a straight reading of the FSEP proposal. It doesn't indicate that it is a "placeholder" or a "specific example." It calls it out as the first implementation and talks about 'vetted' providers. It says:
Emphasis on the "vetted providers" part. If the intent is that "The Bad Space" be an "example" here, then it needs to be made clear that it is being used as an example, and the language about the MVP being "working with The Bad Space" should be changed to another system, such as the already-present and widely used FediBlockHole, or even simple GitHub static pages (or, for that matter, curl calls against any API endpoint with a defined format). Because what it says and what is being said here are two very different things in my eyes. If our interpretations of the document are divergent then that's fine, but then the document should be clarified so as to remove the ambiguity.

I can respond to the rest of this later, but I want to underscore this in triplicate: the language of the FSEP proposal is unambiguous to me, and it places The Bad Space, in particular, in a privileged position as the first and primary provider, with the ability to support more than even a single blocklist being relegated to P3—long after the initial implementation.

The operator of The Bad Space is listed as a contributor without disclosing a conflict of interest, his relationship to The Bad Space, or what the structure of The Bad Space even is. Regardless of anything else about the proposal, and regardless of whether the language about The Bad Space is changed to indicate it is an example, if it is going forward in its current form it needs to disclose this relationship. Because at the moment someone could read it and not realize that the author had any relationship to The Bad Space, and that's a Problem™.

Especially, but not exclusively, because at the moment The Bad Space is a proprietary solution: the code is not open source. It is openly available, but that isn't the same thing as being under an open source license. Fortunately this one is easy to fix: the code can be made OSS and a COI disclosure can be attached. But even if that were fixed and it were made OSS:

1. A disclosure needs to be made, and
2. IMO the language needs to be changed, and the ability to import other blocklists needs to be considered part of Phase 1. Possibly starting with something as simple as a flat file. |
-
I concur with @bhaibel, @dclements, and @greg-kennedy's concerns. Eventual compatibility with multiple blocklist providers does not make the FSEP neutral. By having (e.g.) Mastodon suggest specific blocklists to users, the software makes a statement of suitability of purpose, and that the blocklist's provider is trustworthy. This involves more than the technical challenges of aggregation, polling, and UX design. It raises the question of which providers and APIs are suitable for recommendation and support. This means evaluating the sociotechnical systems which produce those blocklists.

To wit, preceding comments have suggested the use case for automated blocklisting is about blocking Nazis--a goal I think we all agree on! And yet: the upcoming version of The Bad Space lists on the home page babka.social, which advertises itself as "A site where you can be unapologetically Jewish, with a healthy, diverse community of Jews and Jewish allies." TBS indicates three silences from its source pool. I've checked all of those sources by hand: none disclose a reason for the silence. Google and Mastodon hashtag search haven't turned up anything obvious. Is this instance full of Nazis? Transphobia? Harassment? If I silence the instance, would I be protecting user safety or furthering antisemitism? I have no way to tell. How would the FSEP's proposed UI surface this scenario to admins and users?

The instance I administer, woof.group, has three silences on TBS. I have no visibility into why we're on TBS--the process and people which produce it are largely opaque--but I suspect it's because we have a more relaxed CW policy around nudity. Two days ago mastodon.hypnoguys.com and kinky.business were listed as "hate speech" and "poor moderation". I'm sure we're all aware of the kerfuffle last week in which a number of trans and queer instances were also listed. 
Acknowledging that no instance's moderation is wholly unproblematic, and that significant disagreements between progressive moderators exist, the FSEP's choice of TBS as initial blocklist provider raises obvious questions. If the FSEP had been implemented last week, what exactly would have happened to users of those instances?

I've been doing community moderation for roughly a decade, I've been on the Fediverse since 2016, and I've worked on large-scale data aggregation systems. If I were selecting aggregated blocklist providers for the FSEP to support, I'd ask questions like:
The answers to these questions can be institutionally rigorous or entirely informal--"one enby with strong opinions" is a valid way of producing a blocklist at least some people want to use! But when we talk about platform-level integration and recommendation of specific blocklists, these questions should at least be articulated.

These kinds of sociotechnical and ontological questions also appear at the level of the FSEP itself. What level of organizational sophistication, trustworthiness, and stability does the FSEP expect from recommended blocklist providers? Does the platform have a controlled vocabulary of its own? How does the platform establish (or punt on) conformance with the varying vocabularies used by blocklists? How are these choices communicated to admins and users? What affordances do admins and users have to evaluate and offer feedback on blocklist accuracy? The answers to these questions have UX and protocol-level implications, but perhaps that's a matter for others to address. |
-
I recommend considering scrapping both the terms "worst of the worst" and "T0", as they both have a lot of history in fedi blocklist projects. T0 most commonly refers to the Oliphant Tier-0 list, which requires 60%+ agreement between all sources for inclusion. That is, it's a measure of consensus, and does not attempt to exclude the 'opinionated blocking' that you mention.

"Worst of the worst" I have heard applied both to the Tier 0 list and to The Bad Space, even though TBS requires only a 2/9 consensus to show entries, which is equivalent to Tier 3 in the Oliphant blocklist algorithm. So these terms are applied broadly, as you said. I think the proposal may benefit from focusing on specific block categories ("extremist views, fascism, gore against persons for any intentional reason, hate crimes, CSAM") without using these terms, to prevent confusion. |
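The two consensus rules contrasted above can be expressed as one function with different thresholds. This is a sketch of the general idea, not the actual Oliphant tooling:

```python
from collections import Counter

def consensus_list(source_lists: list[set[str]], threshold: float) -> set[str]:
    """Include a domain only when at least `threshold` (a fraction)
    of the source lists agree on it."""
    counts = Counter(domain for s in source_lists for domain in s)
    n = len(source_lists)
    return {domain for domain, c in counts.items() if c / n >= threshold}

sources = [
    {"a.example", "b.example"},
    {"a.example"},
    {"a.example", "c.example"},
]
tier0_style = consensus_list(sources, 0.6)  # 60%+ agreement: consensus only
low_bar = consensus_list(sources, 2 / 9)    # TBS-style 2-of-9 bar: far looser
```

With the same three sources, the 60% rule keeps only the unanimously listed domain, while the 2-of-9-style bar also admits domains listed by a single source--which is the gap between the two terms the comment is pointing at.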
-
An alternative approach can be taken by avoiding centralized blocklists altogether. Instead of centralized blocklists, the instance admin chooses a few sister instances that align with the views of the instance, and the blocklist is built from those sister instances' blocklists. I think this aligns more with the core ideology of the fediverse as decentralized social media. |
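A sketch of this sister-instance idea. Mastodon 4.0+ can expose an instance's public domain blocks at `GET /api/v1/instance/domain_blocks` (only when the admin has made them public; other fediverse software may lack the endpoint), so the fetch below is plausible but unverified against any live server; the merge logic is the point:

```python
import json
import urllib.request
from collections import Counter

def fetch_public_blocks(instance: str) -> set[str]:
    """Fetch an instance's publicly listed domain blocks (Mastodon 4.0+,
    admin permitting). Endpoint availability varies by software."""
    url = f"https://{instance}/api/v1/instance/domain_blocks"
    with urllib.request.urlopen(url) as resp:
        return {entry["domain"] for entry in json.load(resp)}

def sister_blocklist(block_sets: list[set[str]], min_agreement: int = 2) -> set[str]:
    """Keep a domain if at least `min_agreement` sister instances block it."""
    counts = Counter(domain for s in block_sets for domain in s)
    return {domain for domain, c in counts.items() if c >= min_agreement}

# e.g. sister_blocklist([fetch_public_blocks(s) for s in my_sisters])
```

Requiring agreement among sisters (rather than a plain union) is one way to keep a single sister's opinionated block from propagating automatically.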
-
I think it would be prudent if this whole project made some stronger commitments to mitigating potential risks to queer and trans women who are exposed to the strikingly carelessly put-together blocklists made by bad-faith actors, and further reasonable steps should be taken by Nivenly to distance itself from individuals who might discourage trans women and general queer participation in Nivenly activities. There is no questioning that accepting The Bad Space would be an outright transphobic action on behalf of Nivenly, but further, apologies should be made promptly to the victims who have been caught in the crossfire of Nivenly's careless actions. |
-
Just in case anyone's missed it, Nivenly has published an update on FSEP, the key line being: The status of this project is on hold, pending the return of either the original maintainer or a handoff to a new one. |
-
Overall Notes
First: Thank you for putting the proposal together. It is good to see work in this direction as it is a sorely needed area in the fediverse right now.
A lot of my questions and followups have to do with the relationship between FSEP and The Bad Space since the FSEP proposal directly privileges The Bad Space and there is no discussion I can find in the document about the implications of that choice or how to make it easy for other options to exist.
If this document were connecting to a protocol and The Bad Space happens to be an implementer then that would be one question, but it is notable that—despite being able to output in the Mastodon CSV format—the Bad Space uses a methodology for querying that is not well supported by existing management tools such as FediBlockHole (specifically, it uses a POST method for its search where FediBlockHole only uses GET).
Bookkeeping
The Bad Space
The proposal talks extensively about what the interface for connecting to and managing lists provided by The Bad Space should look like, but does not include a lot of information on how The Bad Space works.
Availability
There also seems to be an assumption that The Bad Space (or another chosen server) will be permanently online. It would be good to discuss, within the scope of this proposal, how servers should handle the site being down (or merely unreachable) for any reason, both on initial startup and on an ongoing basis.
Receipts
There seems to be a discrepancy between how the FSEP proposal talks about "receipts" (which is to say: it isn't mentioned) and how others have perceived what The Bad Space will provide in this area (c.f., oliphant on github which says "The receipts, in other words, are federated. They just aren't here. You'll have to search for them, but the Bad Space is coming online and should eventually have all the receipts for public availability").
Are0h has indicated on the fediverse that this is a hard problem to do without enabling further abuse, which may mean that the weight is on the word "eventually" (which is fair!), but it would be good to know—if this is supposed to be a central and first-out-of-the-door primary source—what the plans are here, whether we should expect a separate proposal covering the data handling aspects, or if this is in the "p4 won'tfix" level of eventuality.
Compatibility
Buy-In and Process
Current Processes
Breakdown By Section
Challenge
There seems to be a fundamental conflation here that I think needs to be called out explicitly and early in the process.
Specifically it calls out BlockTogether as having used a "similar" methodology and while this is superficially true it is worth noting that there is a key and important difference in the output of these two systems as well as how they operate.
This breaks down into three primary areas.
First: Individuals on the fediverse already "opt in" to a shared blocklist—that of their server—much the way they would opt in to a blocklist on BlockTogether (except that the server's choices will override theirs, where BlockTogether would not). This is much more akin to a proposal for having the lists on BlockTogether subscribe to each other, if everyone was prompted to import a mandatory BlockTogether list when signing up for a Twitter account (I recognize that the proposal here is not mandatory for those signing up; in this analogy, signing up is equivalent to joining an individual server, not the server's decision to import a list).
Second: With BlockTogether individuals would opt-in to shared blocklists of individuals (not those who had subscribed to a different list). This means that a 1% false positive rate of a 10k person list would result in 100 individuals being erroneously blocked.
With this proposal servers opt-in to shared blocklists of other servers. False positives are going to be much more granular than the server level: if you have a 100 person instance with 5 bad actors the other 95 might know nothing about it.
There still might be reasons to block! That's a perfectly reasonable decision! But what is the false positive count in this case: 1, because the server as a whole was listed in error; 0, because the server genuinely has poor moderation; or 95, because 95% of the users on the server aren't doing anything untoward? Consideration of this question feels essential, and the document, as far as I can tell, doesn't discuss it.
Finally: BlockTogether would respect your downstream decisions as an individual user. If you chose to follow Z and Z ended up on the list, they would remain unblocked; but in this case users don't get that option: the server will override their choice. I'd love to see more on the potential consequences here.
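The false-positive arithmetic in the second point above, made explicit (the numbers are the comment's own illustration, not measured rates):

```python
# Account-level list (BlockTogether style): errors block individuals.
list_size = 10_000
false_positive_rate = 0.01
wrongly_blocked_accounts = int(list_size * false_positive_rate)  # 100 people

# Server-level list (FSEP style): one wrong or coarse entry cuts off
# everyone on the server, including the bystanders.
server_users = 100
bad_actors = 5
collaterally_blocked = server_users - bad_actors  # 95 bystanders
```

The unit of error changes: an account-level list mislabels people one at a time, while a server-level list converts any single listing decision into an all-or-nothing outcome for every user of that server.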
Onboarding
Emphasis added.
What's a Tier 0 list? While I know roughly what it means it is a) not defined anywhere in this doc and b) the production of a Tier 0 list is not listed as a requirement for The Bad Space to be adopted. Is The Bad Space putting forth that their entire list is a Tier 0 list or is there going to be subdivision in this respect in the future?
I'd also call out the "vetted" part: who does this vetting? What is the process for getting on, or off, a Tier 0 list?
One of the problems that occurred with BlockTogether was that doing this merge successfully turns out to be difficult in practice once you also factor in removals, latency, systems going offline, manual overrides, etc. The protocol for this needs to be made explicit, because Mastodon CSV is simply inadequate for that level of nuance.
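One way to see why a snapshot format like the Mastodon CSV can't express removals: merging an old copy of a list with a fresh one silently resurrects entries the provider deliberately delisted. A toy illustration:

```python
# Snapshots can say what IS blocked, not what was deliberately REMOVED.
old_snapshot = {"a.example", "b.example"}
new_snapshot = {"a.example"}  # b.example was delisted upstream

naive_merge = old_snapshot | new_snapshot
# b.example is back: in a snapshot, a delisting is indistinguishable
# from "never heard of it". A subscription protocol would need explicit
# removal events (tombstones) alongside additions to merge correctly.
```

This is exactly the class of problem (removals, latency, stale copies) that the comment says the protocol must address explicitly.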
Deny List Management
The document says "Local status overrides will take precedence over imported instance statuses", but what about conflicts between lists? If I import one list that I don't update and another that I do, and an entry is removed from the updated list but remains on the other, now significantly older, list, what should the behavior be?
This also raises the important question: what about non-binary decisions, such as restricting links or not downloading media? This information is often part of the set of tools admins have at their disposal, but how will conflicts between lists be resolved?
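One possible resolution policy for the conflicts raised here: merge imported lists "most restrictive wins", then let local overrides take precedence, per the sentence quoted from the document. The severity ordering below is hypothetical; Mastodon's real statuses include at least silence/limit, suspend, and flags like rejecting media:

```python
# Hypothetical ordering from least to most restrictive.
SEVERITY = {"none": 0, "reject_media": 1, "silence": 2, "suspend": 3}

def merge_imported(verdicts: list[str]) -> str:
    """Most-restrictive-wins across imported lists."""
    return max(verdicts, key=SEVERITY.__getitem__)

def resolve(domain: str, imported: dict[str, list[str]], local: dict[str, str]) -> str:
    """Local status overrides take precedence over imported statuses."""
    if domain in local:
        return local[domain]
    return merge_imported(imported.get(domain, ["none"]))
```

Note this still doesn't answer the staleness question above: "most restrictive wins" means one abandoned list can pin an entry forever, which argues for per-list freshness or removal semantics in the protocol.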
Moderation Data
It says that:
Which raises the question to me of sequencing: should updates always be manually vetted before being applied if they cause a significant change here?
The document says:
While I think I know what this means, could an illustration or example be included so that I know for sure that my guess is correct?
Conclusions
Thanks again for putting this proposal together and for committing to an open discussion! A good management system here feels vitally important for improving the safety of the fediverse.