Manually Creating the cookies file when X/Twitter login fails

If you are unable to log in using the X/Twitter credentials in your .env file, you can create the cookie file manually.

Prerequisites

A paid X/Twitter Pro account is required for scraping X/Twitter data. Make sure you have one before proceeding with the configuration.

  • Do not use X/Twitter accounts that you care about, since there is a small risk of them being suspended. If that happens, you will still be able to scrape with those credentials, but posting will remain suspended.

Step 1

Verify that the twitter_cookies.json.example file exists

This step requires that you run your Masa node in local mode.

Go to the ~/.masa directory and verify that the twitter_cookies.json.example file exists.
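For example, you can check from a shell:

ls -l ~/.masa/twitter_cookies.json.example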

If the twitter_cookies.json.example file is missing, make sure your .env file includes your X/Twitter login credentials, then make an API call (such as the one shown in Step 5). The node generates the cookie file during the first API call if it is not already present.
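As a rough sketch, the credential entries in .env look something like the lines below. The variable names here are assumptions, so use the exact names from your node's .env example file:

TWITTER_USERNAME="your_x_username"
TWITTER_PASSWORD="your_x_password"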

If the cookie file is still not created, create it manually by continuing with Step 2.


Step 2

Retrieve your X/Twitter cookies from the browser by visiting X/Twitter

Follow these steps in your browser:

  1. Log in to X/Twitter.
  2. Open the browser's developer tools (for example, press F12 or right-click and choose Inspect).
  3. Open the cookies view: Application → Cookies in Chrome-based browsers, or Storage → Cookies in Firefox.
  4. Select the X/Twitter domain and note the cookie values listed in Step 3 below.

Step 3

Create and add cookies values

  1. Create the [username]_twitter_cookies.json cookie file in the ~/.masa directory (alongside the template) using the nano editor.
nano [username]_twitter_cookies.json
  2. Use the twitter_cookies.json.example file as a template for the structure of the new file (a rough sketch follows this list).
  3. Get the following cookie values from the browser and paste them into the [username]_twitter_cookies.json file:
    • personalization_id
    • kdt
    • twid
    • ct0
    • auth_token
    • att
  4. Save the file and exit the nano editor: press CTRL+X, then Y to confirm, and Enter to save.
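For orientation only, a cookie file generally ends up looking something like the sketch below. The field names and structure here are assumptions; the twitter_cookies.json.example template is authoritative, so mirror its structure exactly and substitute your own values:

[
  { "Name": "personalization_id", "Value": "<value from browser>" },
  { "Name": "kdt", "Value": "<value from browser>" },
  { "Name": "twid", "Value": "<value from browser>" },
  { "Name": "ct0", "Value": "<value from browser>" },
  { "Name": "auth_token", "Value": "<value from browser>" },
  { "Name": "att", "Value": "<value from browser>" }
]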

Step 4

Restart your miner node

Restart your Masa node to apply the changes.

make run

Step 5

Test the X/Twitter scraper

Make a curl API call to confirm the X/Twitter scraper is working and cookies are correctly set.

Curl the node in local mode to confirm it returns X/Twitter data:

curl -X 'POST' \
  'http://localhost:8080/api/v1/data/twitter/tweets/recent' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "query": "$Masa AI",
  "count": 1
}'

You should receive a response similar to this:

{
  "data": [
    {
      "Error": null,
      "Tweet": {
        "ConversationID": "1828797710385942907",
        "GIFs": null,
        "HTML": "<a href=\"https://twitter.com/CryptoGodJohn\">@CryptoGodJohn</a> $MASA the leading token for <a href=\"https://twitter.com/hashtag/AI\">#AI</a> and <a href=\"https://twitter.com/hashtag/Data\">#Data</a> <br><a href=\"https://twitter.com/gesepolia Masafi\">@gesepolia Masafi</a>",
        "Hashtags": ["AI", "Data"],
        "ID": "1828900558452797478"
        // ... (other Tweet fields)
      }
    }
  ],
  "workerPeerId": "16Uiu2HAmSCQMh22Xmo1GMxXB73qRx3YaVqqL1UwTYn3iNvQLjPB5"
}

Verify that the workerPeerId in the response matches your node’s peerID.
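If you have jq installed, you can extract the workerPeerId directly from the same endpoint for easier comparison:

curl -s -X 'POST' \
  'http://localhost:8080/api/v1/data/twitter/tweets/recent' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "$Masa AI", "count": 1 }' | jq -r '.workerPeerId'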


Security Considerations

  • Ensure your [username]_twitter_cookies.json file has restrictive permissions (e.g., chmod 600); see the example after this list.
  • Keep your X/Twitter credentials secure and do not share them.
  • Never commit your [username]_twitter_cookies.json file to version control.
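For example, assuming the cookie file lives in ~/.masa, you can tighten its permissions and keep such files out of version control like this:

chmod 600 ~/.masa/[username]_twitter_cookies.json
echo "*_twitter_cookies.json" >> .gitignore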