# 👮‍♂️ `moderate` - Moderate and block bad words from your Rails app
`moderate` is a Ruby gem that moderates user-generated text content by adding a simple validation to block bad words in any text field.
Simply add this to your model:
```ruby
validates :text_field, moderate: true
```
That's it! You're done. `moderate` will work seamlessly with your existing validations and error messages.
> [!WARNING]
> This gem is under development. It currently supports only a limited set of English profanity words. Word matching is very basic for now, so it may be prone to both false positives and false negatives. I use it for simple cases like rejecting new submissions that contain bad words, but the gem could be improved for more complex use cases with more sophisticated matching and content moderation. Please consider contributing if you have ideas for additional features.
# Why
Any text field where users can input text is a place where bad words can appear. This gem blocks records from being created if they contain bad words: profanity, obscenities, and other offensive language.
It's good for Rails applications where you need to maintain a clean and respectful environment in comments, posts, or any other user input.
# How
`moderate` currently downloads a list of ~1k English profanity words from the [google-profanity-words](https://github.com/coffee-and-fun/google-profanity-words) repository and caches it in your Rails app's tmp directory.
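The download-then-cache step described above can be sketched in plain Ruby like this (a simplified illustration, not the gem's actual code; the method name and the block-based fetcher are assumptions for the example):

```ruby
require "fileutils"

# Simplified sketch: fetch the word list once, then serve it from a cache file.
# The fetch step is passed in as a block so the HTTP call stays swappable.
def cached_word_list(cache_path, &fetch)
  unless File.exist?(cache_path)
    FileUtils.mkdir_p(File.dirname(cache_path))
    File.write(cache_path, fetch.call)
  end
  File.readlines(cache_path, chomp: true)
end
```

On the first call the block runs (e.g. an HTTP GET against the repository above) and the result is written to disk; later calls read the cached file and never touch the network.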
## Installation
Add this line to your application's Gemfile:
```ruby
gem "moderate"
```

And then execute:

```
bundle install
```
Then, just add the `moderate` validation to any model with a text field:
```ruby
validates :text_field, moderate: true
```
`moderate` adds a validation error if a bad word is found in the text field, preventing the record from being saved.
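Conceptually, the check the validator performs boils down to something like this plain-Ruby sketch (the word list and the method name are assumptions for illustration, not the gem's actual implementation):

```ruby
# Hypothetical word list, for illustration only.
BAD_WORDS = ["badword", "heck"].freeze

# Tokenize on non-letters and test each whole word against the list,
# so innocent words that merely contain a listed word are not flagged.
def contains_bad_word?(text)
  text.to_s.downcase.scan(/[a-z']+/).any? { |word| BAD_WORDS.include?(word) }
end
```

A real validator would hook this kind of check into Active Model so a match adds an error to the record instead of returning a boolean.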
## Configuration
You can configure the `moderate` gem behavior by adding a `config/initializers/moderate.rb` file:

```ruby
Moderate.configure do |config|
  # Exclude words from the default list (false positives)
  config.excluded_words = ["good"]
end
```
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.