Scripting Tookie-OSINT
You can create a bash script to run Tookie-OSINT with predefined usernames and options. Here's an example:
- Create a Script: Save the following script as `automate_tookie.sh`:

  ```bash
  #!/bin/bash

  # List of usernames to search
  usernames=("john_doe" "jane_doe" "example_user")

  # Loop through each username and run Tookie-OSINT
  for username in "${usernames[@]}"; do
      echo "Searching for $username"
      python3 brib.py --username "$username" --site twitter,facebook
  done
  ```
- Make the Script Executable:

  ```bash
  chmod +x automate_tookie.sh
  ```
- Run the Script:

  ```bash
  ./automate_tookie.sh
  ```
You can schedule the script to run at specific intervals using cron jobs:
- Edit Crontab:

  ```bash
  crontab -e
  ```
- Add a Cron Job: Add the following line to run the script daily at midnight:

  ```
  0 0 * * * /path/to/automate_tookie.sh
  ```
For Windows, you can use Task Scheduler to automate the script:
- Create a Batch File: Save the following as `automate_tookie.bat`:

  ```bat
  @echo off
  python.exe C:\path\to\brib.py --username john_doe --site twitter,facebook
  python.exe C:\path\to\brib.py --username jane_doe --site twitter,facebook
  ```
- Schedule the Task:
- Open Task Scheduler.
- Create a new task.
- Set the trigger to your desired schedule.
- Set the action to run the batch file.
You can also automate it directly within a Python script:
```python
import os
import time

# List of usernames to search
usernames = ["john_doe", "jane_doe", "example_user"]

# Function to run Tookie-OSINT
def run_tookie(username):
    os.system(f"python3 brib.py --username {username} --site twitter,facebook")

# Loop through each username and run Tookie-OSINT
for username in usernames:
    print(f"Searching for {username}")
    run_tookie(username)
    time.sleep(10)  # Add a delay between searches if needed
```
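If you want to detect failed runs, one variant replaces `os.system` with `subprocess.run`, which avoids shell quoting issues and exposes the exit code. This is only a sketch: it assumes the same `brib.py` command-line interface shown above, and `run_tookie`'s keyword arguments are illustrative additions.

```python
import subprocess
import sys

def run_tookie(username, sites="twitter,facebook", script="brib.py"):
    """Run one Tookie-OSINT search and return its exit code.

    Assumes brib.py accepts --username and --site as shown above.
    """
    result = subprocess.run(
        [sys.executable, script, "--username", username, "--site", sites],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"Search for {username} failed: {result.stderr.strip()}")
    return result.returncode

if __name__ == "__main__":
    for username in ["john_doe", "jane_doe", "example_user"]:
        print(f"Searching for {username}")
        run_tookie(username)
```

Passing the command as a list means usernames containing spaces or shell metacharacters are handled safely, which `os.system` does not guarantee.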
You can schedule this Python script using cron jobs or Task Scheduler as described above.
These methods should help you automate Tookie-OSINT effectively.
- Install Tookie-OSINT: First, ensure you have Tookie-OSINT installed. If not, follow these steps:

  ```bash
  git clone https://github.com/Alfredredbird/tookie-osint.git
  cd tookie-osint
  pip3 install -r requirements.txt
  ```
- Create a Configuration File: You can create a configuration file to store your usernames and options. For example, create a file named `config.txt`:

  ```
  john_doe --site twitter,facebook
  jane_doe --site instagram,linkedin
  ```
- Modify the Script to Read from the Configuration File: Create a Python script named `headless_tookie.py` that reads the configuration file and runs Tookie-OSINT:

  ```python
  import os

  # Read the configuration file
  with open('config.txt', 'r') as file:
      lines = file.readlines()

  # Loop through each line in the configuration file
  for line in lines:
      username, options = line.strip().split(' ', 1)
      command = f"python3 brib.py --username {username} {options}"
      os.system(command)
  ```
- Run the Script: Execute the script to run Tookie-OSINT in headless mode:

  ```bash
  python3 headless_tookie.py
  ```
Here’s an example of what your `config.txt` might look like:

```
john_doe --site twitter,facebook
jane_doe --site instagram,linkedin
example_user --site github,reddit
```
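One caveat: the one-line parser in `headless_tookie.py` (`line.strip().split(' ', 1)`) raises an error on blank lines and has no way to skip comments. A slightly more defensive reader is sketched below; `read_config` is a hypothetical helper, not part of Tookie-OSINT.

```python
def read_config(path="config.txt"):
    """Read (username, options) pairs from a config file.

    Skips blank lines and lines starting with '#'. read_config is an
    illustrative helper, not a Tookie-OSINT function.
    """
    jobs = []
    with open(path) as file:
        for line in file:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # ignore blank lines and comments
            # partition never raises, even if the line has no options
            username, _, options = line.partition(" ")
            jobs.append((username, options))
    return jobs
```

Each returned pair can then be formatted into the same `python3 brib.py --username {username} {options}` command as before.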
You can schedule the `headless_tookie.py` script to run at specific intervals using cron jobs or Task Scheduler:
- Cron Job:

  ```bash
  crontab -e
  ```

  Add the following line to run the script daily at midnight:

  ```
  0 0 * * * /usr/bin/python3 /path/to/headless_tookie.py
  ```
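Since cron runs without a terminal, it can be useful to redirect the script's output to a log file so you can review each run afterwards; the log path below is just an example:

```
0 0 * * * /usr/bin/python3 /path/to/headless_tookie.py >> /var/log/tookie.log 2>&1
```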
- Task Scheduler:
- Open Task Scheduler.
- Create a new task.
- Set the trigger to your desired schedule.
- Set the action to run the Python script.
This setup allows you to run Tookie-OSINT in headless mode, automating the process without manual intervention.
- Modify the Script: Open the `brib.py` script (or the main script you are using) and locate the section where the results are printed or saved. You can customize this part to format the output as needed.
- Example: JSON Output: If you want to output the results in JSON format, you can use the `json` module in Python. Here’s an example of how to modify the script:

  ```python
  import json

  # Example result data
  results = {
      "username": "john_doe",
      "sites": {
          "twitter": "https://twitter.com/john_doe",
          "facebook": "https://facebook.com/john_doe"
      }
  }

  # Convert results to JSON format
  json_output = json.dumps(results, indent=4)

  # Save to a file
  with open('results.json', 'w') as file:
      file.write(json_output)
  ```
- Example: CSV Output: If you prefer CSV format, you can use the `csv` module. Here’s an example:

  ```python
  import csv

  # Example result data
  results = [
      {"username": "john_doe", "site": "twitter", "url": "https://twitter.com/john_doe"},
      {"username": "john_doe", "site": "facebook", "url": "https://facebook.com/john_doe"}
  ]

  # Define CSV file headers
  headers = ["username", "site", "url"]

  # Write results to CSV file
  with open('results.csv', 'w', newline='') as file:
      writer = csv.DictWriter(file, fieldnames=headers)
      writer.writeheader()
      writer.writerows(results)
  ```
- Example: HTML Output: For HTML output, you can use basic HTML formatting. Here’s an example:

  ```python
  # Example result data
  results = {
      "username": "john_doe",
      "sites": {
          "twitter": "https://twitter.com/john_doe",
          "facebook": "https://facebook.com/john_doe"
      }
  }

  # Create HTML content
  html_content = f"""
  <html>
  <head><title>OSINT Results</title></head>
  <body>
  <h1>Results for {results['username']}</h1>
  <ul>
  <li>Twitter: <a href="{results['sites']['twitter']}">{results['sites']['twitter']}</a></li>
  <li>Facebook: <a href="{results['sites']['facebook']}">{results['sites']['facebook']}</a></li>
  </ul>
  </body>
  </html>
  """

  # Save to an HTML file
  with open('results.html', 'w') as file:
      file.write(html_content)
  ```
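If any of the interpolated values come from scraped data rather than your own input, it is safer to escape them before building HTML. A small sketch using the standard library's `html.escape`; `link_item` is an illustrative helper name:

```python
import html

def link_item(label, url):
    """Return one <li> entry with the label and URL HTML-escaped.

    link_item is an illustrative helper, not part of Tookie-OSINT.
    """
    safe_url = html.escape(url, quote=True)
    return f'<li>{html.escape(label)}: <a href="{safe_url}">{safe_url}</a></li>'
```

This prevents a username or URL containing `<`, `>`, or `"` from breaking the markup or injecting tags into the report.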
You can integrate these modifications into your automation script to ensure the output is always in your desired format. For example, you can update the `headless_tookie.py` script to include the custom output formatting.
- Identify the Additional Information: Determine what extra details you want to include in the output for each social media site. For example, you might want to add follower counts, account creation dates, or specific metadata.
- Modify the Script: Open the `brib.py` script (or the main script you are using) and locate the section where the results are processed. Add code to handle each site separately.
Here’s an example of how to modify the script to customize the JSON output for Twitter, Facebook, and Instagram:
```python
import json
import datetime

# Example result data
results = {
    "username": "john_doe",
    "search_time": str(datetime.datetime.now()),
    "search_parameters": {
        "sites": ["twitter", "facebook", "instagram"]
    },
    "accounts_found": {
        "twitter": {
            "url": "https://twitter.com/john_doe",
            "followers": 1500,
            "account_creation_date": "2010-05-15"
        },
        "facebook": {
            "url": "https://facebook.com/john_doe",
            "friends": 300,
            "account_creation_date": "2009-08-20"
        },
        "instagram": {
            "url": "https://instagram.com/john_doe",
            "followers": 2000,
            "account_creation_date": "2012-03-10"
        }
    }
}

# Customizing output for specific sites
custom_output = {}

if "twitter" in results["accounts_found"]:
    custom_output["twitter"] = {
        "profile_url": results["accounts_found"]["twitter"]["url"],
        "followers_count": results["accounts_found"]["twitter"]["followers"],
        "created_on": results["accounts_found"]["twitter"]["account_creation_date"]
    }

if "facebook" in results["accounts_found"]:
    custom_output["facebook"] = {
        "profile_url": results["accounts_found"]["facebook"]["url"],
        "friends_count": results["accounts_found"]["facebook"]["friends"],
        "created_on": results["accounts_found"]["facebook"]["account_creation_date"]
    }

if "instagram" in results["accounts_found"]:
    custom_output["instagram"] = {
        "profile_url": results["accounts_found"]["instagram"]["url"],
        "followers_count": results["accounts_found"]["instagram"]["followers"],
        "created_on": results["accounts_found"]["instagram"]["account_creation_date"]
    }

# Convert custom output to JSON format
json_output = json.dumps(custom_output, indent=4)

# Save to a file
with open('custom_results.json', 'w') as file:
    file.write(json_output)
```
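The three `if` blocks differ only in which key holds the audience count, so they can be collapsed with a per-site field map. A sketch: `customize` and `COUNT_FIELDS` are illustrative names, and the field names mirror the example data rather than any fixed Tookie-OSINT schema.

```python
# Which key holds the audience count for each site (example schema only).
COUNT_FIELDS = {"twitter": "followers", "facebook": "friends", "instagram": "followers"}

def customize(accounts_found):
    """Build the customized per-site output from an accounts_found dict."""
    output = {}
    for site, data in accounts_found.items():
        count_key = COUNT_FIELDS.get(site, "followers")
        output[site] = {
            "profile_url": data["url"],
            f"{count_key}_count": data[count_key],
            "created_on": data["account_creation_date"],
        }
    return output
```

Adding support for a new site then means adding one entry to `COUNT_FIELDS` instead of another copied `if` block.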
If you prefer CSV format, you can customize the output for specific sites as follows:
```python
import csv

# Example result data
results = [
    {"username": "john_doe", "site": "twitter", "url": "https://twitter.com/john_doe", "followers": 1500, "account_creation_date": "2010-05-15"},
    {"username": "john_doe", "site": "facebook", "url": "https://facebook.com/john_doe", "friends": 300, "account_creation_date": "2009-08-20"},
    {"username": "john_doe", "site": "instagram", "url": "https://instagram.com/john_doe", "followers": 2000, "account_creation_date": "2012-03-10"}
]

# Define CSV file headers
headers = ["username", "site", "url", "followers/friends", "account_creation_date"]

# Write results to CSV file
with open('custom_results.csv', 'w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=headers)
    writer.writeheader()
    for result in results:
        if result["site"] == "twitter":
            writer.writerow({
                "username": result["username"],
                "site": result["site"],
                "url": result["url"],
                "followers/friends": result["followers"],
                "account_creation_date": result["account_creation_date"]
            })
        elif result["site"] == "facebook":
            writer.writerow({
                "username": result["username"],
                "site": result["site"],
                "url": result["url"],
                "followers/friends": result["friends"],
                "account_creation_date": result["account_creation_date"]
            })
        elif result["site"] == "instagram":
            writer.writerow({
                "username": result["username"],
                "site": result["site"],
                "url": result["url"],
                "followers/friends": result["followers"],
                "account_creation_date": result["account_creation_date"]
            })
```
You can integrate these modifications into your automation script to ensure the output is always customized for specific sites. For example, update the `headless_tookie.py` script to include the custom output formatting.
- Identify the Criteria: Determine the criteria for adding custom tags or labels. For example, you might want to tag accounts with a high number of followers or accounts created before a certain date.
- Modify the Script: Open the `brib.py` script (or the main script you are using) and locate the section where the results are processed. Add code to evaluate the criteria and assign tags or labels accordingly.
Here’s an example of how to modify the script to add custom tags based on follower count and account creation date:
```python
import json
import datetime

# Example result data
results = {
    "username": "john_doe",
    "search_time": str(datetime.datetime.now()),
    "search_parameters": {
        "sites": ["twitter", "facebook", "instagram"]
    },
    "accounts_found": {
        "twitter": {
            "url": "https://twitter.com/john_doe",
            "followers": 1500,
            "account_creation_date": "2010-05-15"
        },
        "facebook": {
            "url": "https://facebook.com/john_doe",
            "friends": 300,
            "account_creation_date": "2009-08-20"
        },
        "instagram": {
            "url": "https://instagram.com/john_doe",
            "followers": 2000,
            "account_creation_date": "2012-03-10"
        }
    }
}

# Customizing output with tags
custom_output = {}
for site, data in results["accounts_found"].items():
    tags = []
    if site == "twitter" and data["followers"] > 1000:
        tags.append("high_followers")
    if site == "facebook" and data["friends"] > 500:
        tags.append("popular")
    if datetime.datetime.strptime(data["account_creation_date"], "%Y-%m-%d") < datetime.datetime(2015, 1, 1):
        tags.append("veteran")
    custom_output[site] = {
        "profile_url": data["url"],
        "followers/friends": data.get("followers", data.get("friends")),
        "created_on": data["account_creation_date"],
        "tags": tags
    }

# Convert custom output to JSON format
json_output = json.dumps(custom_output, indent=4)

# Save to a file
with open('custom_results.json', 'w') as file:
    file.write(json_output)
```
If you prefer CSV format, you can customize the output to include labels as follows:
```python
import csv
import datetime

# Example result data
results = [
    {"username": "john_doe", "site": "twitter", "url": "https://twitter.com/john_doe", "followers": 1500, "account_creation_date": "2010-05-15"},
    {"username": "john_doe", "site": "facebook", "url": "https://facebook.com/john_doe", "friends": 300, "account_creation_date": "2009-08-20"},
    {"username": "john_doe", "site": "instagram", "url": "https://instagram.com/john_doe", "followers": 2000, "account_creation_date": "2012-03-10"}
]

# Define CSV file headers
headers = ["username", "site", "url", "followers/friends", "account_creation_date", "tags"]

# Write results to CSV file
with open('custom_results.csv', 'w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=headers)
    writer.writeheader()
    for result in results:
        tags = []
        if result["site"] == "twitter" and result["followers"] > 1000:
            tags.append("high_followers")
        if result["site"] == "facebook" and result["friends"] > 500:
            tags.append("popular")
        if datetime.datetime.strptime(result["account_creation_date"], "%Y-%m-%d") < datetime.datetime(2015, 1, 1):
            tags.append("veteran")
        writer.writerow({
            "username": result["username"],
            "site": result["site"],
            "url": result["url"],
            "followers/friends": result.get("followers", result.get("friends")),
            "account_creation_date": result["account_creation_date"],
            "tags": ", ".join(tags)
        })
```
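Since the same tagging rules appear in both the JSON and CSV paths, factoring them into one function keeps the criteria in a single place. A sketch using the thresholds from the examples; `compute_tags` is an illustrative helper name:

```python
import datetime

def compute_tags(site, data):
    """Return the list of tags for one account, using the example thresholds.

    compute_tags is an illustrative helper, not part of Tookie-OSINT.
    """
    tags = []
    if site == "twitter" and data.get("followers", 0) > 1000:
        tags.append("high_followers")
    if site == "facebook" and data.get("friends", 0) > 500:
        tags.append("popular")
    created = datetime.datetime.strptime(data["account_creation_date"], "%Y-%m-%d")
    if created < datetime.datetime(2015, 1, 1):
        tags.append("veteran")
    return tags
```

Both output writers can then call `compute_tags(site, data)`, so changing a threshold later only touches one function.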
You can integrate these modifications into your automation script to ensure the output is always customized with tags or labels based on site-specific criteria. For example, update the `headless_tookie.py` script to include the custom output formatting.