A smart contract you have never interacted with is asking for an approval. Your wallet modal is open. The spender address is unfamiliar. You have about ninety seconds to decide: you will either click "Confirm" or click "Reject", and if you linger too long before rejecting, the frontend assumes you are lost and drops you back to its landing page.
Ninety seconds is not enough time to read the full contract source. It is enough time to apply four cheap lenses that, in combination, get you most of the way to a good decision without needing to be a Solidity engineer. This post describes those four lenses, the order to apply them, and what signals to walk away on.
Throughout the piece, the working assumption is that you have a block explorer open (Etherscan, BaseScan, Arbiscan, etc., matching the chain of the contract). You do not need Foundry, you do not need Tenderly, and you do not need a mainnet fork. What you need is the ability to read a couple of tabs.
Lens 1 — Bytecode (5 seconds)
Paste the spender address into the block explorer. The first signal is in the first tab you land on, under "Contract": there is either source code or there is not. If the "Contract" tab shows nothing — just raw bytecode with no "Source Code" section — the contract is unverified. Its source was never published to the explorer; what the chain is executing is opaque to anyone without a decompiler.
Unverified ≠ malicious. Plenty of legitimate contracts ship unverified, particularly early or on L2s where verification tooling was immature. But unverified does mean you cannot apply Lens 2 or Lens 3. For a one-time, small-value interaction with a team you recognise by name, this may be acceptable. For an unlimited approval on a fresh wallet, it is not.
Five-second verdict: verified → continue to Lens 2. Unverified → walk away or dramatically cap the approval amount.
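If you would rather script Lens 1 than eyeball it, the Etherscan-family explorers expose the same signal through their contract API. A minimal sketch, assuming the documented getsourcecode endpoint, where an unverified contract comes back with an empty SourceCode field; the fetch_source helper and the sample responses below are illustrative:

```python
"""Lens 1 sketch: is the spender's source verified on the explorer?"""
import json
from urllib.request import urlopen

ETHERSCAN_API = "https://api.etherscan.io/api"  # swap for BaseScan/Arbiscan as needed


def is_verified(api_response: dict) -> bool:
    """True if the explorer holds published source for the address.

    Etherscan-style responses carry result[0]["SourceCode"], which is
    an empty string for unverified contracts.
    """
    result = api_response.get("result") or []
    if not result:
        return False
    return bool(result[0].get("SourceCode", "").strip())


def fetch_source(address: str, api_key: str) -> dict:
    """Network call to the explorer (not exercised in this sketch)."""
    url = (f"{ETHERSCAN_API}?module=contract&action=getsourcecode"
           f"&address={address}&apikey={api_key}")
    with urlopen(url) as resp:
        return json.load(resp)


# Example responses, shaped like the explorer's:
unverified = {"status": "1",
              "result": [{"SourceCode": "",
                          "ABI": "Contract source code not verified"}]}
verified = {"status": "1",
            "result": [{"SourceCode": "contract Router { ... }",
                        "ABI": "[...]"}]}
```

The same two-line check works against BaseScan and Arbiscan, which mirror the Etherscan API shape.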
Lens 2 — Source (20 seconds)
Open the "Contract → Source Code" tab. You will see a list of files (most contracts are multi-file). Look at the top-level contract name — the one inheriting from the others. It tells you what the contract thinks it is: Router, Pool, Bridge, Airdropper, Forwarder, Proxy, etc. The name is a strong signal.
Scroll the main file for two things: (a) the public-function list, (b) the imports. Ctrl-F for "function" tells you what the contract can do. Look specifically for these names:
- execute, delegatecall, multicall, swap, bridge — legitimate in routers, aggregators, bridges. Expected.
- rescue, sweep, emergencyWithdraw, withdrawAll — privileged functions. If an owner can pull any token that has approved the contract, you want to understand the owner relationship before signing (Lens 4).
- transferFrom called on an arbitrary token with arbitrary parameters — this is the drainer pattern. Very few legitimate contracts take an arbitrary (token, from, to, amount) from untrusted input.
- setApprovalForAll on anything other than the contract's own NFT collection — suspicious.
The imports tell you the inheritance. Common legitimate imports: OpenZeppelin Ownable, ReentrancyGuard, Pausable, ERC20, ERC721. Uniswap IUniswapV2Router, IV3SwapRouter. These are reassurance, not proof — drainers also import OpenZeppelin — but their absence, combined with a fresh deployer, is a flag.
Twenty-second verdict: function shape matches the contract's claimed role → continue. Function shape shows drain primitives the claimed role does not need → walk away.
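The same shapes can be checked mechanically against the calldata your wallet is showing you. A minimal sketch using the standard ERC-20/ERC-721 selectors; the classification labels and field layout comments are my own, and only the three signatures named in the text are handled:

```python
MAX_UINT256 = 2**256 - 1

# 4-byte selectors of the standard signatures
APPROVE = "095ea7b3"               # approve(address,uint256)
TRANSFER_FROM = "23b872dd"         # transferFrom(address,address,uint256)
SET_APPROVAL_FOR_ALL = "a22cb465"  # setApprovalForAll(address,bool)


def read_approval(calldata: str) -> dict:
    """Classify the calldata you are about to sign.

    calldata: hex string, with or without the 0x prefix.
    Returns the function name plus a flag for unlimited approvals.
    """
    data = calldata.lower().removeprefix("0x")
    selector, args = data[:8], data[8:]
    if selector == APPROVE:
        # each argument is a 32-byte word; addresses are right-aligned
        spender = "0x" + args[24:64]
        amount = int(args[64:128], 16)
        return {"fn": "approve", "spender": spender,
                "unlimited": amount == MAX_UINT256}
    if selector == TRANSFER_FROM:
        return {"fn": "transferFrom", "unlimited": False}
    if selector == SET_APPROVAL_FOR_ALL:
        approved = int(args[64:128], 16) == 1
        return {"fn": "setApprovalForAll", "unlimited": approved}
    return {"fn": "unknown", "unlimited": False}
```

An unlimited-approve prompt is exactly the case where Lens 3 and Lens 4 matter most, so a wallet or script flagging `unlimited: True` is a useful tripwire.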
Lens 3 — Deployment history (20 seconds)
Switch from "Contract" to "Events" or back to the main address view, and look at the first transaction: the Contract Creation. Two things to read.
First, the deployer address. Click through to the deployer's profile. The question is: has this deployer shipped other recognisable contracts? A deployer with a dozen other contracts over two years, mostly verified, mostly interacted with by many addresses, is a reputable entity. A deployer with one contract, deployed three hours ago, funded five hours ago from Tornado Cash or a known mixer, is the opposite.
Second, the age. Contracts deployed within the last 72 hours deserve extra scrutiny for user-approval flows. Most drainers are short-lived — phishers deploy, collect, abandon. A contract three weeks old with ten thousand interactions is qualitatively different from a contract deployed six hours ago with forty interactions (all from the same flow).
Twenty-second verdict: deployer has a track record and the contract is not freshly-deployed → continue. Fresh deployment from a mixer-funded deployer → walk away.
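The Lens 3 read reduces to a couple of thresholds, which makes it easy to sketch. The DeployerProfile fields and the scoring below are illustrative, following the 72-hour and track-record heuristics from the text:

```python
from dataclasses import dataclass


@dataclass
class DeployerProfile:
    contracts_deployed: int    # other contracts shipped by this deployer
    contract_age_hours: float  # time since the Contract Creation tx
    funded_from_mixer: bool    # deployer funding traced to a known mixer


def lens3_verdict(p: DeployerProfile) -> str:
    """Illustrative scoring of the 20-second deployment-history read.

    Thresholds follow the article: "fresh" means deployed within the
    last 72 hours; a "track record" means other shipped contracts.
    """
    if p.funded_from_mixer and p.contract_age_hours < 72:
        return "walk away"       # fresh deployment from a mixer-funded deployer
    if p.contract_age_hours < 72 or p.contracts_deployed == 0:
        return "extra scrutiny"  # fresh or no track record: read Lens 4 carefully
    return "continue"
```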
Lens 4 — On-chain behaviour (30 seconds)
The "Transactions" tab tells you how the contract is being used. Scroll the most recent 50 transactions and look for these patterns:
- Many users, small amounts each. Normal retail usage. The contract is being used as intended by a distribution of wallets. Reassuring.
- A few users, escalating amounts. Interesting but inconclusive. Could be a concentrated product (treasury tool, institutional integration). Could be rehearsal for an extraction.
- Many users approve, no one withdraws. The contract is collecting approvals but not doing anything else. Classic drainer setup.
- Constant calls from one or two addresses. Either a bot is farming the contract (could be legitimate, like an aggregator routing orders) or the contract is owned by those addresses and executes a narrow workflow.
- Withdrawals to an address different from the ones deposits come from. If funds come in from users and leave to a non-user address, read that non-user address carefully. Is it the protocol treasury (labelled)? Is it an unlabelled new address that then forwards to a mixer?
The "Token Transfers" tab is the sibling view — it shows which ERC-20s the contract is moving. For a claimed DEX router, you would expect the router to see every token under the sun pass through. For an NFT marketplace, you would expect setApprovalForAll calls followed by occasional transferFrom during sales. For a "yield aggregator," you would expect deposits and withdrawals of the underlying assets plus the aggregator's receipt token. Pattern mismatches between the contract's claimed role and its actual token movements are strong signals.
Thirty-second verdict: activity matches the contract's claimed role → continue to sign. Activity mismatches — especially the "many approvals, no withdrawals" pattern, or the "withdrawals to a mixer-adjacent address" pattern — walk away.
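The patterns above can be sketched as a rough classifier over the last N transactions, represented here as (sender, method) pairs. The thresholds are illustrative, not calibrated:

```python
from collections import Counter


def classify_activity(txs: list[tuple[str, str]]) -> str:
    """Rough read of a contract's recent transactions.

    Flags the classic drainer setup: many distinct wallets calling
    approve while nothing ever leaves the contract.
    """
    senders = {sender for sender, _ in txs}
    methods = Counter(method for _, method in txs)
    approvals = methods["approve"] + methods["setApprovalForAll"]
    outflows = methods["withdraw"] + methods["swap"] + methods["bridge"]
    if len(senders) >= 10 and approvals >= 10 and outflows == 0:
        return "drainer setup"           # many users approve, nothing else happens
    if len(senders) <= 2 and len(txs) >= 10:
        return "single-operator"         # bot or owner-driven workflow; inconclusive
    return "looks like retail usage"
```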
Putting the four lenses together
The combined verdict lives in how the four lenses agree. A contract that passes all four (verified source, function shape matching its claimed role, reputable deployer, consistent on-chain behaviour) is safe enough to sign. A contract that fails any one is worth at least reading more carefully; a contract that fails two or more is not worth the risk for most users in most contexts.
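The combining rule is mechanical enough to write down. A sketch, with the lens names chosen for illustration:

```python
def combined_verdict(lens_passes: dict[str, bool]) -> str:
    """Combine the four lens results per the rule in the text:
    all pass -> sign; one failure -> read more; two or more -> walk away."""
    failures = [name for name, ok in lens_passes.items() if not ok]
    if not failures:
        return "safe enough to sign"
    if len(failures) == 1:
        return f"read more carefully ({failures[0]} failed)"
    return "walk away"
```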
The honest caveat: this method is not an audit. An auditor reading the same contract for eight hours may find a subtle reentrancy bug you missed in thirty seconds. But the ninety-second read captures the large majority of outright-malicious contracts, because malicious contracts very rarely spend resources passing all four lenses — the payoff per contract is small and the operator moves on.
Worked example — the Socket bridge, January 2024
For grounding, apply the four lenses to what we now know about the Socket bridge incident. At the time of user approvals in 2023, Socket's contracts would have scored:
- Lens 1 (bytecode): PASS. Verified on Etherscan and BaseScan.
- Lens 2 (source): PASS. Function names consistent with a cross-chain bridge (bridge, swapAndBridge, standard OZ imports). The vulnerable function's signature was not obviously dangerous to a quick read.
- Lens 3 (history): PASS. Socket was a reputable, long-running project with many interactions.
- Lens 4 (behaviour): PASS. Real bridging traffic, many users, withdrawals to labelled protocol addresses.
All four lenses would have given Socket a green light. The exploit was a subsequently-discovered logic flaw in how the contract passed call data to external tokens — the kind of thing only a deeper audit catches. A user following the four-lens method in 2023 would have granted the approval with reasonable confidence; the later exploit instead vindicated the approval-revocation habit, which limited the loss for users who had already cleared their allowances.
That's the second lesson of the four-lens method: it catches categorical fraud (drainer contracts, phishing proxies) far better than it catches latent protocol bugs. Pair it with the hygiene habit of revoking approvals you are not actively using, and you are covering both exposure classes with roughly the time a single interaction demands.
If you want a tool
The four lenses are something you can apply manually in ninety seconds with a block explorer. If you would rather a tool apply them and produce a summary, our widget at allowanceguard.com/docs/widget embeds a pre-transaction review pane in dApps that integrate it, showing the key signals from lenses 1–4 for the spender of any approval you are about to sign. For post-hoc review of approvals you have already signed, the main scanner at allowanceguard.com applies the same catalogue.
But the method is the point, not the tool. A reader of this post with a block explorer tab open is most of the way there on their own. Tools shorten the time; they do not invent the judgement.

