🪂 AI-powered Airdrop Finder
Find the best airdrops, effortlessly.
Because searching for airdrops is difficult, dangerous, and time-consuming, we’ve developed our own AI-powered airdrop finder.
➡️ With our Airdrop Finder, effortlessly discover the hottest ongoing airdrops and stay ahead on upcoming opportunities. Never miss the next big win in the cryptocurrency space!
🤖 Automation: Every airdrop is automatically fetched from social networks and the web. All steps, airdrop states, grades, and other details are generated by state-of-the-art AI models from every airdrop-related source we collect.
🔐 Security: Every link is verified to ensure users are not redirected to scam sites. We combine a whitelist of known links with a grading system based on string similarity to known domains (for example, the lookalike app.rad4ardrop.com would receive a bad grade) to keep users safe. We continuously update our database with public APIs to ensure the highest level of security and efficiency.
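As an illustrative sketch (the domain list, threshold, and function names are our assumptions, not the production code), the similarity-grading idea can look like this:

```python
# Hypothetical sketch of the link-grading idea: exact whitelist hits are
# trusted; domains that are merely *similar* to a known domain are graded
# as lookalikes (the dangerous case); everything else is unknown.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_DOMAINS = {"app.radardrop.com", "uniswap.org"}  # illustrative whitelist
LOOKALIKE_THRESHOLD = 0.8                             # assumed similarity cutoff

def classify_link(url: str) -> str:
    domain = urlparse(url).netloc.lower()
    if domain in KNOWN_DOMAINS:
        return "trusted"
    best = max(SequenceMatcher(None, domain, known).ratio()
               for known in KNOWN_DOMAINS)
    # High similarity to a known domain without an exact match is the
    # classic typosquatting signature, so it gets the worst grade.
    return "lookalike" if best >= LOOKALIKE_THRESHOLD else "unknown"
```

With these assumed values, `classify_link("https://app.rad4ardrop.com")` returns `"lookalike"`, while an unrelated domain comes back `"unknown"`.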
🏎️ Farming efficiency: Every airdrop is sorted by default by its "temperature" (based on its popularity on the platform) so users can easily find the trendiest airdrops. Users can also farm the most recently fetched airdrops, NFT-related airdrops, and more.
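A minimal sketch of the default ordering; the record shape and values are made up for illustration:

```python
from datetime import datetime

# Hypothetical records; "temperature" mirrors the popularity score above.
airdrops = [
    {"name": "AlphaDrop", "temperature": 72, "fetched_at": datetime(2024, 5, 1)},
    {"name": "BetaDrop",  "temperature": 95, "fetched_at": datetime(2024, 5, 3)},
    {"name": "GammaDrop", "temperature": 72, "fetched_at": datetime(2024, 5, 2)},
]

def hottest_first(items: list[dict]) -> list[dict]:
    # Sort by temperature descending; break ties by most recently fetched.
    return sorted(items, key=lambda a: (a["temperature"], a["fetched_at"]),
                  reverse=True)
```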
The process can be split into three parts: sourcing, AI processing, and storage.
We primarily use X (formerly Twitter) and its API to retrieve airdrops from discussions. We search for specific keywords with the Search API. Every post and its author go through a filtering process to ensure the content is relevant and secure.
We also use a web scraper to find web3-related websites and search their documentation, whitepapers, etc. We look for terms like “airdrops” and “incentives & rewards” to identify which projects might lead to an airdrop. These websites go through the same filtering process as the posts, ensuring their security.
For every website we scrape, we ensure that we are allowed by checking the robots.txt file for our agent name: "RD-Agent".
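Python's standard library can express this check. The robots.txt content below is a made-up example; only the agent name "RD-Agent" comes from the text above:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt that allows RD-Agent everywhere except /private/
# while disallowing all other crawlers.
robots_txt = """\
User-agent: RD-Agent
Disallow: /private/

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

def may_scrape(path: str, agent: str = "RD-Agent") -> bool:
    # In production the robots.txt would be fetched from the target site;
    # here it is parsed from the string above.
    return parser.can_fetch(agent, path)
```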
Every post, website, or document we fetch is stored as a "source" in a BullMQ queue. Sources are then processed one by one by our fine-tuned OpenAI models.
We use a technique called "decomposition" that allows models to be specific for each task, making them more efficient. We extract information like the name, description of the project, type of airdrop, etc., in about 10 different AI inferences.
The steps are then generated by another specifically trained model.
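The decomposition idea can be sketched as one narrow prompt per field. The task list and the `run_model` stub below are our assumptions, not the actual fine-tuned models:

```python
# Each field gets its own specialized inference instead of one giant prompt.
EXTRACTION_TASKS = {
    "name":         "Extract the project name from the source below.",
    "description":  "Summarize the project in one sentence.",
    "airdrop_type": "Classify the airdrop type (retroactive, points, testnet).",
}

def run_model(instruction: str, source: str) -> str:
    """Stand-in for a call to a fine-tuned model; returns a canned answer."""
    return f"[{instruction.split()[0].lower()}] {source[:20]}"

def extract_airdrop(source: str) -> dict:
    # ~10 inferences in production; three are shown here for brevity.
    return {field: run_model(prompt, source)
            for field, prompt in EXTRACTION_TASKS.items()}
```

Because each model only has to be good at one narrow task, prompts stay short and failures are isolated to a single field.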
For an airdrop to be listed on RadarDrop, we wait for multiple trusted sources to be fetched and processed.
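The listing rule can be sketched like this (the trusted-origin set and the threshold of two sources are assumptions):

```python
TRUSTED_ORIGINS = {"x.com", "project-docs"}  # assumed trusted source types
MIN_TRUSTED_SOURCES = 2                      # assumed listing threshold

def ready_to_list(sources: list[dict]) -> bool:
    # Count distinct, fully processed sources that come from a trusted origin.
    trusted = {s["url"] for s in sources
               if s["origin"] in TRUSTED_ORIGINS and s["processed"]}
    return len(trusted) >= MIN_TRUSTED_SOURCES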
Each time an airdrop is listed on RadarDrop, we turn all the information into different documents and store them in a Chroma vector database. It ensures data is stored efficiently and allows users to retrieve information quickly.
The airdrops themselves are stored in Postgres, a proven and performant SQL database.
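Turning a listed airdrop into vector-store documents might look like this sketch; the field names are illustrative, and the resulting lists are shaped like the `ids`/`documents`/`metadatas` arguments a Chroma collection accepts:

```python
def to_documents(airdrop: dict) -> tuple[list[str], list[str], list[dict]]:
    # Split one airdrop record into small documents, each tagged with
    # metadata so retrieval can filter by airdrop and field.
    ids, docs, metas = [], [], []
    for field in ("description", "steps", "eligibility"):
        text = airdrop.get(field)
        if text:
            ids.append(f'{airdrop["slug"]}:{field}')
            docs.append(text)
            metas.append({"airdrop": airdrop["slug"], "field": field})
    return ids, docs, metas
```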