
refactor: consolidate fee splitting functions in Gateway contract #80

Open
onahprosper wants to merge 16 commits into main from processOnrampSettlement

Conversation

@onahprosper
Collaborator

@onahprosper onahprosper commented Jan 24, 2026

Description

This PR eliminates code duplication in the Gateway contract's fee handling by consolidating the duplicated fee splitting functions into single reusable implementations.

Changes made:

  • Unified fee splitting logic for both `settleIn` and `settleOut` functions
  • Removed duplicate `_handleLocalTransferFeeForSettleIn` function
  • Removed duplicate `_handleFxTransferFeeForSettleIn` function
  • Updated `settleIn` to use the consolidated `_handleLocalTransferFeeSplitting` and `_handleFxTransferFeeSplitting` functions
  • Both functions now properly handle fee splitting based on token rate (local vs FX transfers)

Benefits:

  • Reduced code duplication and improved maintainability
  • Consistent fee handling logic across settlement functions
  • Easier to update fee logic in the future as it's centralized

References

Related to the Gateway contract fee handling implementation.

Testing

  • This change adds test coverage for new/changed/fixed functionality

Manual testing:

  • Verified that `settleIn` now uses the same fee splitting logic as `settleOut`
  • Confirmed proper handling of local transfers (rate = 100) and FX transfers (rate != 100)
  • Ensured fee distribution to sender, provider, and treasury remains consistent
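The local-vs-FX dispatch verified above can be sketched as plain arithmetic. This is a hypothetical model, not the contract's actual logic: `splitFees` and its formula are illustrative, with the fee-setting names (`providerToAggregator`, `providerToAggregatorFx`) borrowed from the test fixture's `setTokenFeeSettings` call.

```javascript
// Hypothetical model of the consolidated fee split (illustrative only).
const MAX_BPS = 100000n; // basis-point denominator used in the tests

function splitFees(amount, senderFee, rate, providerToAggregator, providerToAggregatorFx) {
  // local transfer when rate == 100, FX transfer otherwise
  const bps = rate === 100n ? providerToAggregator : providerToAggregatorFx;
  const aggregatorFee = ((amount - senderFee) * bps) / MAX_BPS;
  const recipientAmount = amount - senderFee - aggregatorFee;
  return { recipientAmount, senderFee, aggregatorFee };
}

const local = splitFees(1000000n, 10000n, 100n, 500n, 50n); // local path
const fx = splitFees(1000000n, 10000n, 95n, 500n, 50n);     // FX path
console.log(local.recipientAmount, fx.recipientAmount);
```

Whatever the exact formulas, the point of the consolidation is that both `settleIn` and `settleOut` route through one dispatch like this instead of two divergent copies.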

Checklist

  • I have added documentation for new/changed functionality in this PR
  • All active GitHub checks for tests, formatting, and security are passing
  • The correct base branch is being used, if not `main`

By submitting a PR, I agree to Paycrest's Contributor Code of Conduct and Contribution Guide.

Summary by CodeRabbit

  • New Features

    • Added settleIn settlement flow for inbound token settlements alongside existing settleOut outbound flow
    • Added ProviderBatchCallAndSponsor contract enabling sponsored batch transaction execution with signature verification
    • Gateway now deployed behind TransparentUpgradeableProxy for upgradeability
  • Bug Fixes & Updates

    • Refactored settlement fee-splitting logic to properly handle local vs FX transfer paths
    • Updated event signatures for enhanced settlement tracking; renamed OrderSettled to SettleOut and added SettleIn event
  • Infrastructure

    • Upgraded to ethers.js v6 and Hardhat v3 with new plugin tooling
    • Migrated codebase to ES modules
    • Added Ignition deployment modules for Gateway proxy and ProviderBatchCallAndSponsor contracts

@onahprosper onahprosper requested a review from chibie as a code owner January 24, 2026 01:02
@coderabbitai

coderabbitai bot commented Jan 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

The PR refactors the Gateway settlement mechanism by renaming settle() to settleOut() and adding a new settleIn() function for in-token settlement, introduces a new ProviderBatchCallAndSponsor contract for batched off-chain-signed execution, upgrades tooling to Hardhat v3 and ethers v6 with ESM support, and updates all related artifacts, type definitions, tests, and fixtures accordingly.

Changes

Cohort / File(s) Summary
Gateway Settlement Flow Refactoring
contracts/Gateway.sol, contracts/interfaces/IGateway.sol
Renamed settle() → settleOut(), added new settleIn() function with token transfer and fee handling. Updated internal fee-splitting signatures to include _senderFeeRecipient and explicit _token parameters. Changed events from OrderSettled to SettleOut and added SettleIn event. Updated SenderFeeTransferred event signature to include _orderId.
Batch Execution Contract
contracts/ProviderBatchCallAndSponsor.sol, contracts/ProviderBatchCallAndSponsor.sol/ProviderBatchCallAndSponsor.json
New contract enabling off-chain-signed batched call execution with nonce-based replay protection. Features ECDSA signature recovery, structured Call struct with fields (address to, uint256 value, bytes data), and events CallExecuted and BatchExecuted.
Test Files & Utilities
test/gateway/gateway.settleOrder.test.js, test/utils/utils.manager.js, test/fixtures/*.js, test/gateway/gateway.ownable.test.js
Migrated to ESM syntax, replaced BigNumber with native BigInt (1n, 100000n, etc.), updated gateway events from OrderSettled to SettleOut/SettleIn, added comprehensive test suite for settleIn with FX/local variants and fee splitting validation. Updated fixture deployment to use manual proxy pattern instead of hardhat-upgrades.
Hardhat Configuration & Package
hardhat.config.ts, package.json
Refactored hardhat.config.ts from HardhatUserConfig object to defineConfig() pattern, added plugins array, updated networks with type definitions, introduced npmFilesToBuild and per-compiler optimizer settings, replaced etherscan/customChains with verify/chainDescriptors sections. Updated package.json to declare ESM ("type": "module"), replaced dependencies (ethers ^5 → ^6, hardhat ^2 → ^3), added @nomicfoundation tooling.
Artifact & Type Definition Generation
artifacts/**/*.json, artifacts/**/*.d.ts, types/ethers-contracts/**/*.ts
Updated all artifact formats from hh-sol-artifact-1 to hh3-artifact-1. Deleted legacy OpenZeppelin artifact files (OwnableUpgradeable, Initializable, PausableUpgradeable, AddressUpgradeable, ContextUpgradeable, ERC20, IERC20, IERC20Metadata, Context). Added new artifacts and TypeScript typings for ProxyAdmin, TransparentUpgradeableProxy, Gateway, GatewaySettingManager, ProviderBatchCallAndSponsor, and IGateway. Added comprehensive type definitions with typed event namespaces and contract method signatures for ethers v6 compatibility.
Deployment Infrastructure
ignition/modules/*.ts, ignition/deployments/chain-*/...
Added Hardhat Ignition modules: Gateway.ts, GatewayDeployProxy.ts, GatewayUpgrade.ts, ProviderBatchCallAndSponsor.ts. Added deployment artifacts and journals for multiple chains (8453, 1, 1135, 137, 42161, 42220, 56) documenting contract deployments with transaction hashes and addresses.
Build Info & Compiler Artifacts
artifacts/build-info/solc-*.json
Added multiple Solc 0.8.20 build-info JSON files capturing compiler settings, embedded OpenZeppelin source contents (ECDSA, Math, SignedMath, Strings, Proxy patterns), and application contracts (Gateway, GatewaySettingManager, IGateway, ProviderBatchCallAndSponsor).
TypeScript Utilities & Common Types
types/ethers-contracts/common.ts, tsconfig.json, tasks/*.ts
Added comprehensive TypeScript typing layer for strongly-typed contract events, logs, listeners, and method interactions. Updated tsconfig.json from commonjs to es2020 module system with bundler moduleResolution. Added Hardhat tasks: accounts.ts (list signers and balances) and flatten.ts (flatten contracts with normalized headers).
Removed/Deleted Files
GatewayFlattened.sol, artifacts/.../OwnableUpgradeable.json
Deleted the complete GatewayFlattened.sol (1326 lines), removed numerous legacy debug artifact (.dbg.json) and compiled artifact files for OpenZeppelin dependencies, indicating transition to on-demand artifact generation and ESM-based build tooling.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Gateway
    participant TokenContract
    participant Treasury
    participant Provider

    Client->>Gateway: settleIn(orderId, token, amount, senderFeeRecipient, senderFee, recipient, rate)
    activate Gateway
    Gateway->>TokenContract: transferFrom(client, gateway, amount)
    activate TokenContract
    TokenContract-->>Gateway: success
    deactivate TokenContract
    
    Gateway->>Gateway: deduct senderFee from amount
    Gateway->>Treasury: transfer protocol/aggregator fees
    activate Treasury
    Treasury-->>Gateway: received
    deactivate Treasury
    
    Gateway->>Provider: transfer remaining amount
    activate Provider
    Provider-->>Gateway: received
    deactivate Provider
    
    Gateway->>Gateway: emit SettleIn event
    Gateway-->>Client: true
    deactivate Gateway
sequenceDiagram
    participant OffChainSigner
    participant ProviderBatchCallAndSponsor as Batch Contract
    participant TargetContract1
    participant TargetContract2

    OffChainSigner->>OffChainSigner: Build calls array + sign digest
    OffChainSigner->>ProviderBatchCallAndSponsor: execute(calls[], signature)
    activate ProviderBatchCallAndSponsor
    
    ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: Compute digest from nonce + calls
    ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: Recover signer via ECDSA
    
    alt Signer == address(this)
        ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: Increment nonce
        ProviderBatchCallAndSponsor->>TargetContract1: call(value, data)
        activate TargetContract1
        TargetContract1-->>ProviderBatchCallAndSponsor: success
        deactivate TargetContract1
        ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: emit CallExecuted
        
        ProviderBatchCallAndSponsor->>TargetContract2: call(value, data)
        activate TargetContract2
        TargetContract2-->>ProviderBatchCallAndSponsor: success
        deactivate TargetContract2
        ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: emit CallExecuted
        
        ProviderBatchCallAndSponsor->>ProviderBatchCallAndSponsor: emit BatchExecuted
        ProviderBatchCallAndSponsor-->>OffChainSigner: success
    else Invalid signer
        ProviderBatchCallAndSponsor-->>OffChainSigner: revert
    end
    deactivate ProviderBatchCallAndSponsor

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • feat: new fee structure #70: Modifies the same Gateway contract settlement architecture, fee-splitting functions, and event signatures; represents a related refactoring effort in the settlement flow.

Poem

🐰 A gateway splits its song,
settle-in and settle-out dance along,
Batches signed with ECDSA's might,
Hopping from v5 to v6 in the night,
TypeScript types now guard the way,
Escrow awaits a brighter day!


@onahprosper onahprosper self-assigned this Jan 24, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
contracts/Gateway.sol (1)

214-222: Event name mismatch: emits OrderSettled but interface declares SettleOut.

The interface has been updated to declare SettleOut event (line 41 in IGateway.sol), but the implementation still emits OrderSettled. This will cause a compilation error or event signature mismatch.

🐛 Proposed fix
 		// emit settled event
-		emit OrderSettled(
+		emit SettleOut(
 			_splitOrderId,
 			_orderId,
 			_liquidityProvider,
 			_settlePercent,
 			_rebatePercent
 		);
🤖 Fix all issues with AI agents
In `@contracts/Gateway.sol`:
- Line 238: The settleIn handler only checks order[_orderId].isFulfilled and can
overwrite an existing order created by createOrder; add a strict existence check
(e.g., require(order[_orderId].creator == address(0) || !order[_orderId].exists,
'OrderAlreadyExists')) before writing to the order mapping in settleIn to ensure
an orderId cannot be reused, and apply the same existence check to the analogous
settlement code path mentioned in the review (the other settlement block that
currently mirrors this logic) so neither path can overwrite orders from
createOrder.
- Around line 227-228: The NatSpec reference above function settleIn in
Gateway.sol is pointing to the wrong symbol ({processSettlement-IGateway});
update the doc to reference the correct interface member by changing the tag to
{settleIn-IGateway} (or the exact symbol name used in IGateway) so the NatSpec
link resolves to the interface method for settleIn.
- Line 237: The settleIn function is missing the onlyAggregator modifier,
allowing unauthorized callers to perform settlements; update the settleIn
function signature (the external whenNotPaused returns (bool) declaration for
settleIn) to include onlyAggregator just like settleOut and refund so only the
aggregator can invoke it, ensuring the same access control is applied.
- Around line 242-253: In settleIn, the contract only transfers _amount up front
but for local transfers (rate == 100) the function later calls
_handleLocalTransferFeeSplitting which expects the contract to hold _senderFee
as well; update the deposit so the contract pulls the sender fee for local
transfers by changing the initial IERC20(_token).transferFrom call to transfer
_amount + _senderFee when _rate == 100 (compute a requiredDeposit variable
before the transfer and use it), keep processedAmount -= _senderFee and
protocolFee logic intact, and ensure this mirrors createOrder's behavior so
subsequent local fee transfers in _handleLocalTransferFeeSplitting succeed.

In `@contracts/interfaces/IGateway.sol`:
- Around line 52-60: The SettleIn event declares an address parameter named
senderFee which is misleading; rename the parameter to senderFeeRecipient in the
SettleIn event declaration (and update any references) so it matches the emitted
argument in Gateway.sol and clarifies that the value is an address recipient,
not an amount; update the event signature and all usages that reference SettleIn
(including any emits) to use senderFeeRecipient.
- Around line 41-47: The implementation still emits the old OrderSettled event;
update the emission to use the interface's renamed event SettleOut with the
exact parameter order and indexed fields (bytes32 splitOrderId, bytes32 indexed
orderId, address indexed liquidityProvider, uint64 settlePercent, uint64
rebatePercent). Locate the emit call in Gateway.sol (the function that currently
does emit OrderSettled(...)) and replace it with emit SettleOut(...) passing the
same values so listeners of SettleOut receive the event.

In `@contracts/ProviderBatchCallAndSponsor.sol`:
- Around line 56-60: The current construction of encodedCalls via repeated
abi.encodePacked concatenation is both quadratic (repeated reallocations) and
unsafe for hashing because abi.encodePacked with dynamic bytes can produce
ambiguous concatenations; instead compute the digest with
keccak256(abi.encode(nonce, calls)) (or keccak256(abi.encode(nonce,
individualCallStructArray)) if calls is a struct array) and remove the manual
loop and encodedCalls variable. Replace the abi.encodePacked usage with
abi.encode when building the hash and use the calls array directly in keccak256
to eliminate collision risk and O(n²) behavior (referencing encodedCalls, calls,
nonce, and the keccak256 line).
🧹 Nitpick comments (4)
contracts/ProviderBatchCallAndSponsor.sol (3)

25-25: Hardcoded gateway address limits contract reusability.

The gatewayAddress is a compile-time constant, which means this contract cannot be reused across different deployments or networks where the Gateway contract has a different address.

Consider using an immutable variable set in the constructor or an upgradeable pattern if flexibility is needed:

♻️ Suggested refactor
-    address public constant gatewayAddress = 0x30F6A8457F8E42371E204a9c103f2Bd42341dD0F;
+    address public immutable gatewayAddress;
+
+    constructor(address _gatewayAddress) {
+        require(_gatewayAddress != address(0), "Invalid gateway");
+        gatewayAddress = _gatewayAddress;
+    }

71-78: Consider removing or documenting the stub function more clearly.

This overloaded execute function always reverts. If it's intentionally disabled, consider:

  1. Adding a more descriptive revert reason explaining why direct execution is not supported
  2. Or removing it entirely if not needed for interface compliance

Also, minor typo in the natspec: "doesnt" → "doesn't".


99-103: Original revert reason is lost on failed calls.

When a call fails, the original revert reason is discarded, making debugging difficult. Consider capturing and bubbling up the revert data:

♻️ Suggested improvement
     function _executeCall(Call calldata callItem) internal {
-        (bool success,) = callItem.to.call{value: callItem.value}(callItem.data);
-        require(success, "Call reverted");
+        (bool success, bytes memory returnData) = callItem.to.call{value: callItem.value}(callItem.data);
+        if (!success) {
+            if (returnData.length > 0) {
+                assembly {
+                    revert(add(returnData, 32), mload(returnData))
+                }
+            }
+            revert("Call reverted");
+        }
         emit CallExecuted(msg.sender, callItem.to, callItem.value, callItem.data);
     }
contracts/interfaces/IGateway.sol (1)

74-74: Consider whether all three parameters should be indexed.

All three parameters (orderId, sender, amount) are indexed. Having amount indexed is unusual since you typically filter events by order or sender, not exact amounts. This uses an extra topic slot unnecessarily.

♻️ Suggested change
-	event SenderFeeTransferred(bytes32 indexed orderId, address indexed sender, uint256 indexed amount);
+	event SenderFeeTransferred(bytes32 indexed orderId, address indexed sender, uint256 amount);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
contracts/Gateway.sol (1)

156-163: Clarify whether settleOut should include whenNotPaused modifier.

settleOut is missing the whenNotPaused modifier while createOrder and settleIn both include it. If pause is intended to halt all settlements, add the modifier; if settleOut should remain operational during pause (e.g., to allow the aggregator to process pending orders), document this design choice.

🔧 Possible adjustment
-	) external onlyAggregator returns (bool) {
+	) external onlyAggregator whenNotPaused returns (bool) {
🤖 Fix all issues with AI agents
In `@contracts/Gateway.sol`:
- Around line 181-184: Update the misleading comment above the call to
_handleFxTransferFeeSplitting in Gateway.sol: replace “// fx transfer - sender
keeps all fee” with a concise note that reflects actual behavior (e.g., that
sender fee may be split and a portion can be routed to treasury/aggregator via
senderToAggregator logic). Reference the order[_orderId].senderFee and the
senderToAggregator path in the comment so readers know
_handleFxTransferFeeSplitting can allocate part of the sender fee to
treasury/aggregator rather than the sender keeping the entire fee.

In `@contracts/interfaces/IGateway.sol`:
- Around line 177-180: Update the NatSpec for the settleIn function in the
IGateway interface to clarify that it is intentionally permissionless for onramp
flows (callers supply tokens via transferFrom) rather than limited to an
"authorized aggregator"; explicitly state expected caller behavior, token
transfer pattern, and reference EIP-7702/onramp usage so integrators don’t
assume aggregator-only access and so implementers know to enforce any off-chain
aggregator checks elsewhere.
- Around line 182-186: Update the NatSpec for the _amount parameter to state
that it represents the total tokens transferred into the Gateway (i.e., the
gross transfer-in amount that includes sender fee and any protocol/FX fees)
rather than the net amount delivered to the recipient; apply this change to the
_amount `@param` in the comment for settleIn (and any related Gateway interface
comments) so integrators understand fees are deducted from _amount when
computing recipient and fee amounts.

Your Name added 2 commits January 26, 2026 14:20
…Out and implement settleIn for order processing
…ependencies and add license information for several packages

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
test/gateway/gateway.settleOrder.test.js (1)

1106-1112: Fix BigNumber arithmetic: use .add() instead of + operator.

MAX_BPS is a BigNumber, and using the + operator will cause JavaScript type coercion issues rather than performing proper BigNumber addition. This could result in unexpected behavior or test failures.

🐛 Proposed fix
-		const invalidRebatePercent = MAX_BPS + 1; // 100.001%
+		const invalidRebatePercent = MAX_BPS.add(1); // 100.001%
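Since the walkthrough notes these tests migrated from BigNumber to native BigInt, an alternative (illustrative values, assuming `MAX_BPS` is declared as a bigint) is to stay entirely in bigint arithmetic, which also fails loudly on accidental mixing:

```javascript
// Native bigint arithmetic: exact addition, and mixing a bigint with a
// plain number throws instead of silently coercing. Values illustrative.
const MAX_BPS = 100000n;
const invalidRebatePercent = MAX_BPS + 1n; // 100001n, i.e. 100.001%

let mixingThrew = false;
try {
  const bad = MAX_BPS + 1; // TypeError: cannot mix BigInt and other types
} catch (e) {
  mixingThrew = true;
}
console.log(invalidRebatePercent, mixingThrew);
```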
🤖 Fix all issues with AI agents
In `@contracts/ProviderBatchCallAndSponsor.sol`:
- Around line 29-31: The constructor sets gatewayAddress without validation, so
deploying with address(0) would break onlyAggregator checks; update the
constructor (constructor) to validate _gatewayAddress != address(0) using a
require that reverts with a clear message (e.g., "gatewayAddress cannot be
zero") before assigning gatewayAddress, ensuring onlyAggregator can function
correctly.

In `@test/utils/utils.manager.js`:
- Around line 29-43: The Events.Gateway.SenderFeeTransferred entry is unused in
tests; either add assertions for the contract's sender-fee emission in the
settle flow or remove the mapping key to avoid stale test utilities. Locate
test(s) that call the contract's settle function (the contract method settle)
and add an assertion that the transaction emits the SenderFeeTransferred event
(using your test framework's to.emit/expect pattern and the
Events.Gateway.SenderFeeTransferred symbol with the expected args), or simply
delete Events.Gateway.SenderFeeTransferred from test/utils/utils.manager.js if
you prefer not to cover it.
♻️ Duplicate comments (2)
contracts/ProviderBatchCallAndSponsor.sol (1)

60-65: Harden batch signature digest (domain separation + collision resistance).

The digest isn’t bound to chain/contract and uses abi.encodePacked over dynamic bytes, which can collide and is O(n²). Consider hashing the structured data with explicit domain separation.

🔒 Proposed fix
-        bytes memory encodedCalls;
-        for (uint256 i = 0; i < calls.length; i++) {
-            encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);
-        }
-        bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));
+        bytes32 digest = keccak256(abi.encode(block.chainid, address(this), nonce, calls));
(Reference: Solidity documentation on abi.encodePacked hash collisions with dynamic bytes, and the recommendation to use abi.encode for hashing.)
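The collision this comment warns about is easy to demonstrate outside Solidity. The sketch below is a simplified stand-in for `abi.encodePacked` (plain concatenation) versus `abi.encode`-style framing (modelled here with a 1-byte length prefix instead of ABI's 32-byte head):

```javascript
// Packed encoding of dynamic bytes has no length prefixes, so two
// different call lists produce the same digest pre-image.
const packed = (...chunks) => Buffer.concat(chunks);

const a = packed(Buffer.from([0x01, 0x02]), Buffer.from([0x03]));
const b = packed(Buffer.from([0x01]), Buffer.from([0x02, 0x03]));
const collides = a.equals(b); // identical bytes, hence identical hash

// Length-prefixing each element (a crude model of abi.encode's framing)
// makes the two inputs distinguishable again.
const framed = (...chunks) =>
  Buffer.concat(chunks.map((c) => Buffer.concat([Buffer.from([c.length]), c])));

const stillCollides = framed(Buffer.from([0x01, 0x02]), Buffer.from([0x03]))
  .equals(framed(Buffer.from([0x01]), Buffer.from([0x02, 0x03])));
console.log(collides, stillCollides);
```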
contracts/Gateway.sol (1)

181-184: Update FX fee-split comments to match behavior.

_handleFxTransferFeeSplitting does split the sender fee; the comments say the sender keeps all fees.

📝 Suggested edit
-                // fx transfer - sender keeps all fee
+                // fx transfer - split sender fee per senderToAggregator settings
...
-                // FX transfer - sender keeps all fee
+                // FX transfer - split sender fee per senderToAggregator settings

Also applies to: 287-288

🧹 Nitpick comments (4)
artifacts/@openzeppelin/contracts/utils/Strings.sol/Strings.json (1)

1-10: Add artifacts/ to .gitignore to exclude auto-generated build files.

The artifacts/ directory currently appears in version control with 41 tracked files. These are auto-generated Hardhat artifacts that can be regenerated via npx hardhat compile. Committing them bloats the repository and causes unnecessary diffs. The .gitignore file has a comment on line 13 but the actual exclusion pattern is missing.

Add artifacts/ as a proper entry in .gitignore to prevent these files from being tracked.

contracts/Gateway.sol (1)

242-242: Clarify that _amount already includes sender fee.

A short inline note here would prevent confusion with createOrder, which transfers _amount + _senderFee.

📝 Suggested edit
-        IERC20(_token).transferFrom(msg.sender, address(this), _amount);
+        // _amount already includes sender fee; fee is split later
+        IERC20(_token).transferFrom(msg.sender, address(this), _amount);

Based on learnings, this reflects the intended off-chain computation of _amount including _senderFee.

test/gateway/gateway.settleOrder.test.js (2)

1315-1343: Remove unused variable.

The ret variable from getSupportedInstitutions() is fetched but not used in this test case. The test only needs to verify the revert condition.

♻️ Proposed cleanup
 		it("Should revert when settleIn is called with zero sender fee for local transfer", async function () {
-			const ret = await getSupportedInstitutions();
 			const orderId = ethers.utils.keccak256(ethers.utils.defaultAbiCoder.encode(["string"], ["test-order-4"]));
 			const rate = 100; // Local transfer requires sender fee
 			const data = [{ bank_account: "09090990903" }];

1437-1444: Consider moving settings reset to afterEach for test robustness.

If this test fails before reaching the reset block (lines 1437-1444), subsequent tests may run with incorrect token fee settings. Moving the reset to an afterEach hook ensures cleanup regardless of test outcome.

♻️ Alternative: Use afterEach for cleanup
// Inside describe("settleIn", function () {
afterEach(async function () {
  // Ensure token settings are reset after each test
  await gateway.connect(this.deployer).setTokenFeeSettings(
    mockUSDT.address,
    50000,  // senderToProvider
    50000,  // providerToAggregator
    0,      // senderToAggregator
    500     // providerToAggregatorFx
  );
});

Then remove the manual reset at lines 1437-1444.

Contributor

@chibie chibie left a comment


in addition,

processedAmount should be senderAmount

…tocol fees

- Changed the data structure in IGateway interface to replace senderFeeRecipient with aggregatorFee.
- Updated Gateway contract logic to calculate and transfer aggregator fees instead of protocol fees.
- Modified settleOrder tests to reflect changes in fee calculations and ensure correct amounts are transferred.
- Removed unused event SenderFeeTransferred from Events.
…regator address retrieval

- Renamed variables for clarity in fee calculations, changing `senderAmount` to `amountToSettle`.
- Updated logic to correctly handle aggregator fees during order settlements.
- Added a new function `getAggregator` in the IGateway interface to retrieve the aggregator address.
- Adjusted event emissions to reflect changes in fee handling.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@contracts/Gateway.sol`:
- Around line 227-272: The settleIn function treats _amount as net instead of
the documented gross (it currently calls IERC20.transferFrom(msg.sender,
address(this), _amount + _senderFee) and sends full _amount to the recipient
while also handling _senderFee), causing double‑charging and overpayment; fix
by: change the incoming pull to IERC20.transferFrom(..., _amount) only, compute
amountToSettle = _amount - _senderFee (and then subtract any aggregatorFee for
FX flows), send aggregatorFee to treasury if applicable, update
order[_orderId].amount to the net amountToSettle, and ensure
IERC20.transfer(_recipient, amountToSettle) and order fields (senderFee,
protocolFee) reflect these net/gross semantics in settleIn.

In `@contracts/interfaces/IGateway.sol`:
- Around line 176-197: The docs/implementation mismatch for settleIn must be
reconciled by treating _amount as the gross amount (includes sender fee)
everywhere: update Gateway.settleIn so it transfersFrom only the gross _amount
(not _amount + _senderFee), compute recipientAmount = _amount - _senderFee (and
subtract any protocol fee if applicable), send recipientAmount to _recipient,
and send _senderFee to _senderFeeRecipient; ensure function signature and
NatSpec remain consistent with this gross semantics and adjust any internal
references that currently assume _amount is net.
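The gross-amount semantics the two comments converge on can be checked with a small conservation sketch (hypothetical values; `settleInAmounts` and the fee figures are illustrative, not the contract's code):

```javascript
// Gross semantics: the Gateway pulls only `amount` via transferFrom,
// and every fee is carved out of that single deposit.
function settleInAmounts(amount, senderFee, aggregatorFee) {
  if (senderFee + aggregatorFee > amount) {
    throw new Error("fees exceed gross amount");
  }
  const pulledFromCaller = amount; // one transferFrom(msg.sender, this, amount)
  const recipientAmount = amount - senderFee - aggregatorFee;
  return { pulledFromCaller, recipientAmount };
}

const r = settleInAmounts(1000000n, 10000n, 500n);
// conservation: everything pulled in is paid back out
console.log(r.recipientAmount + 10000n + 500n === r.pulledFromCaller);
```

Pulling `_amount + _senderFee` while also deducting `_senderFee` from the payout would break this conservation check, which is exactly the double-charge the comment describes.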

…nt after deducting all fees

- Adjusted transfer logic to handle gross amount and subtract sender and protocol fees.
- Updated variable names for clarity, changing `senderAmount` to `recipientAmount`.
- Ensured that the correct amount is transferred to the recipient after all fees are deducted.
- Emitted settlement event with the updated recipient amount.
- Updated index files to export MockUSDT and its factory.
- Modified IGateway interface to include changes in event parameters for SettleIn.
- Updated hardhat types to include new contract factories and deployment methods for ProxyAdmin and TransparentUpgradeableProxy.
- Refactored existing index files to organize exports for better clarity and maintainability.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🤖 Fix all issues with AI agents
In
`@artifacts/build-info/solc-0_8_20-32286d087051e35025cad535aaaac947e68287bf.json`:
- Around line 48-49: The contract in the artifact contains inconsistent naming:
rename contract MockUSDT to MockUSDC, fix the ERC20 name in the constructor from
"MockUDSC" to "MockUSDC" (keep or adjust the symbol "MUSDC" as desired), and
update any string literals or require messages that reference "MockUSDT" (e.g.,
in burnAll) to "MockUSDC"; ensure the constructor, mint, burn, and burnAll
functions still compile after renaming and then regenerate the build artifacts.

In
`@artifacts/build-info/solc-0_8_20-fca8cc730a3cef896a4a57d6ce6a72d2a65b5d82.json`:
- Line 49: The digest construction in execute() currently builds encodedCalls
via a manual loop using abi.encodePacked which can cause collisions; replace
that logic by computing the digest with abi.encode to include length prefixes
(i.e., remove the encodedCalls loop and set bytes32 digest =
keccak256(abi.encode(nonce, calls))); update the execute(Call[] calldata calls,
bytes calldata signature) function to use this new digest computation
(references: execute, encodedCalls, digest) and remove the abi.encodePacked
accumulation to prevent replayable signature collisions.

In `@artifacts/contracts/ProviderBatchCallAndSponsor.sol/artifacts.d.ts`:
- Around line 7-42: Rename the locally declared interface IGateway to a unique
name (e.g., IAggregatorGateway) and update all references: change the interface
declaration symbol and the ArtifactMap entries in artifacts.d.ts from IGateway
to IAggregatorGateway, and update usages in the contract code such as the
onlyAggregator modifier, the gatewayAddress getter, and any cast/ABI usage that
references IGateway so they now reference IAggregatorGateway; ensure the ABI
entry name (getAggregator) and any import/fully‑qualified names (e.g.,
"contracts/ProviderBatchCallAndSponsor.sol:IGateway") are replaced accordingly
to avoid the duplicate ArtifactMap key.

In `@contracts/ProviderBatchCallAndSponsor.sol`:
- Around line 25-27: The comment above the gatewayAddress constant has a typo
("eao"); update the comment to use the correct acronym "EOA" (Externally Owned
Account) so the comment reads e.g. "constant because when it assigned to EOA we
want to ensure it never changes" or similar; locate the comment immediately
preceding the address public constant gatewayAddress and replace "eao" with
"EOA".

In `@hardhat.config.ts`:
- Around line 118-148: The chainDescriptors entries for chains 42220 (celo),
534352 (scroll), and 1135 (lisk) have swapped fields: swap the values of url and
apiUrl in each object's blockExplorers.etherscan so that url is the human-facing
explorer website and apiUrl is the API endpoint used for verification (update
the objects under chainDescriptors for 42220, 534352, and 1135 accordingly).

In `@ignition/modules/GatewayDeployProxy.ts`:
- Around line 10-14: The proxy is being constructed with empty initialization
data ("0x"), leaving the implementation uninitialized; encode and pass
Gateway.initialize(...) as the initializer when creating
TransparentUpgradeableProxy. Use the gateway contract's initialize function
selector/encoded call (as shown in the test fixture) and replace "0x" in the
m.contract("TransparentUpgradeableProxy", [gateway, proxyAdminOwner, "0x"]) call
with the ABI-encoded Gateway.initialize(...) payload so the proxy delegates the
required MAX_BPS, Ownable2Step and Pausable setup to the implementation during
deployment.

In `@ignition/modules/GatewayUpgrade.ts`:
- Around line 34-38: The current call m.staticCall(proxy, "admin", [],
"getProxyAdmin") will revert for non-admin callers because
ITransparentUpgradeableProxy.admin() is ifAdmin-gated; replace this by either
accepting the ProxyAdmin address from configuration/environment and using that
value to set proxyAdminAddress, or read the EIP‑1967 admin storage slot directly
(compute ADMIN_SLOT = bytes32(uint256(keccak256('eip1967.proxy.admin')) - 1) and
read storage at that slot for existingProxyAddress) and convert the returned
slot value to an address; update references to proxyAdminAddress and remove the
m.staticCall to ITransparentUpgradeableProxy.admin so the upgrade flow no longer
depends on calling admin() as a non-admin.
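As a sketch of the storage-slot approach described above: the slot constant is fixed by EIP-1967, and the stub provider below is a hypothetical stand-in for an ethers v6 `Provider`, whose `getStorage(address, slot)` method has the same shape.

```typescript
// Read the ProxyAdmin address from the EIP-1967 admin slot instead of
// calling the ifAdmin-gated admin() function.
// ADMIN_SLOT = bytes32(uint256(keccak256("eip1967.proxy.admin")) - 1)
const ADMIN_SLOT =
  "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103";

interface SlotReader {
  getStorage(address: string, slot: string): Promise<string>;
}

async function getProxyAdmin(provider: SlotReader, proxy: string): Promise<string> {
  const raw = await provider.getStorage(proxy, ADMIN_SLOT);
  // The slot holds the admin address left-padded to 32 bytes; keep the last 20.
  return "0x" + raw.slice(-40);
}

// Stub provider for illustration (a real ethers v6 Provider exposes the
// same getStorage method):
const stub: SlotReader = {
  async getStorage() {
    return "0x000000000000000000000000aabbccddeeff00112233445566778899aabbccdd";
  },
};
getProxyAdmin(stub, "0x0000000000000000000000000000000000000000").then((admin) =>
  console.log(admin) // logs 0xaabbccddeeff00112233445566778899aabbccdd
);
```

This avoids the revert entirely because storage reads are not routed through the proxy's `ifAdmin` dispatch.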

In `@package.json`:
- Around line 19-37: Two test files (test/gateway/gateway.settleOrder.test.js
and test/gateway/gateway.createorder.test.js) still use CommonJS require() which
breaks under "type":"module"; change each require(...) to ES module import
statements (use named or default import form matching the original
destructuring), update any module.exports in code they import to export/exports
(or convert exported helpers to export default/named) and adjust references
accordingly, and handle CommonJS globals if present (replace
__dirname/__filename uses with import.meta.url or path utilities). Locate
require calls in those two files and replace them with appropriate import lines
and fix any exported helper modules to use export syntax so tests run under ESM.
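A minimal sketch of the conversion, assuming Node's ESM loader; the commented `require` lines are illustrative of the pattern being replaced, not the tests' exact imports.

```typescript
// Before (CommonJS — breaks under "type": "module"):
//   const { expect } = require("chai");
// After (ESM):
//   import { expect } from "chai";

// CommonJS globals like __dirname must also be rebuilt from import.meta.url:
import { fileURLToPath } from "node:url";
import path from "node:path";

const testDir = path.dirname(fileURLToPath(import.meta.url));
console.log(testDir); // absolute directory of the current module
```

Helpers consumed by the tests likewise need `module.exports = { foo }` converted to `export { foo }` (or `export default`) so the import forms above resolve.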

In `@test/fixtures/mockUSDT.js`:
- Around line 7-8: The deployment result from ethers.deployContract("MockUSDT")
is returned before the transaction is mined; after obtaining mockUSDT, await
mockUSDT.waitForDeployment() before calling mockUSDT.getAddress() so the
deployment transaction is confirmed and avoids race conditions in tests.

In `@test/utils/utils.manager.js`:
- Around line 76-79: The helper mockMintDeposit uses the deprecated
gateway.address (ethers v6); change it to await gateway.getAddress() and pass
that result to usdc.connect(account).approve(...). Specifically, inside
mockMintDeposit ensure you call const gatewayAddr = await gateway.getAddress()
(or inline await gateway.getAddress()) and use gatewayAddr when calling
usdc.connect(account).approve(...), keeping the function async and preserving
the mint call.
- Around line 56-74: The code in getSupportedInstitutions uses the deprecated
ethers.utils.formatBytes32String; replace those calls with
ethers.encodeBytes32String for currency, accessBank.code/name and
diamondBank.code/name to be compatible with ethers v6 (update any imports if
needed so ethers is the v6 API). Ensure you update all occurrences in
getSupportedInstitutions to call ethers.encodeBytes32String("...") for the
currency, accessBank.code, accessBank.name, diamondBank.code, and
diamondBank.name.

In `@types/ethers-contracts/common.ts`:
- Around line 44-49: The deploy signature on MinEthersFactory is wrong: change
deploy(...a: ARGS[]) to accept the argument tuple rather than an array of tuples
(e.g., deploy(...a: ARGS) or deploy(...args: ARGS)) so the factory deploy takes
the normal variadic tuple parameters; update MinEthersFactory's deploy signature
accordingly and keep GetContractTypeFromFactory and GetARGsTypeFromFactory
(which uses Parameters<F['deploy']>) intact so they resolve the correct tuple
types.
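The intended typing can be sketched in isolation; `TokenFactory` and its argument tuple below are hypothetical, chosen only to show that `Parameters<F['deploy']>` resolves to the expected tuple once `deploy` spreads `ARGS` directly instead of `ARGS[]`.

```typescript
// Corrected shape: deploy spreads the tuple ARGS itself (variadic tuple),
// not an array of tuples.
interface MinEthersFactory<C, ARGS extends any[]> {
  deploy(...args: ARGS): Promise<C>;
}

type GetARGsTypeFromFactory<F> = F extends MinEthersFactory<any, any>
  ? Parameters<F["deploy"]>
  : never;

// Hypothetical factory to show the tuple resolving correctly:
interface TokenFactory extends MinEthersFactory<{ name: string }, [string, bigint]> {}

type TokenArgs = GetARGsTypeFromFactory<TokenFactory>; // resolves to [string, bigint]

// Compile-time check: a matching tuple is accepted as the deploy arguments.
const args: TokenArgs = ["MockToken", 1_000_000n];
console.log(args.length); // 2
```

With the original `deploy(...a: ARGS[])` signature, the same assignment would demand `[string, bigint][]` elements and fail to type-check.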
🧹 Nitpick comments (2)
hardhat.config.ts (1)

86-110: Inconsistent optimizer settings across compiler versions.

The optimizer is disabled for versions 0.8.18 and 0.8.9 but enabled for 0.8.20. If this is intentional (e.g., only optimizing production contracts compiled with 0.8.20), consider adding a comment to clarify. Otherwise, this inconsistency could lead to unexpected gas costs or behavior differences.

contracts/ProviderBatchCallAndSponsor.sol (1)

101-105: Consider bubbling revert reasons from failed calls.

When a call fails, the current implementation reverts with a generic "Call reverted" message. This makes debugging difficult since the actual revert reason from the target contract is lost.

♻️ Proposed fix to bubble revert reasons
 function _executeCall(Call calldata callItem) internal {
     (bool success, bytes memory returnData) = callItem.to.call{value: callItem.value}(callItem.data);
-    require(success, "Call reverted");
+    if (!success) {
+        if (returnData.length > 0) {
+            assembly {
+                revert(add(32, returnData), mload(returnData))
+            }
+        } else {
+            revert("Call reverted");
+        }
+    }
     emit CallExecuted(msg.sender, callItem.to, callItem.value, callItem.data);
 }

Comment on lines +48 to +49
"project/contracts/mocks/MockUSDC.sol": {
"content": "//SPDX-License-Identifier: UNLICENSED\npragma solidity ^0.8.18;\n\nimport {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\";\n\n/// @dev Mock mintable USDC\ncontract MockUSDT is ERC20 {\n constructor() ERC20(\"MockUDSC\", \"MUSDC\") {\n _mint(msg.sender, 1_000_000E18);\n }\n\n function mint(uint256 _amount) external {\n _mint(msg.sender, _amount);\n }\n\n function burn(uint256 _amount) external {\n _burn(msg.sender, _amount);\n }\n\n function burnAll() external {\n uint256 _balanceOf = balanceOf(msg.sender);\n require(_balanceOf > 0, \"MockUSDT: Nothing to burn\");\n _burn(msg.sender, _balanceOf);\n }\n}\n"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Find and inspect the actual MockUSDC.sol file
echo "=== Searching for MockUSDC.sol source file ==="
fd -a 'MockUSDC.sol' --type f

# Also check if MockUSDT.sol exists
echo -e "\n=== Searching for MockUSDT.sol source file ==="
fd -a 'MockUSDT.sol' --type f

# Check for any mock token files in the contracts/mocks directory
echo -e "\n=== Looking for mock token files ==="
fd -a 'Mock.*\.sol' --type f | grep -i 'mock\|usdc\|usdt'

Repository: paycrest/contracts

Length of output: 290


🏁 Script executed:

#!/bin/bash

# If MockUSDC.sol exists, show its content
echo "=== Checking MockUSDC.sol content ==="
if [ -f "project/contracts/mocks/MockUSDC.sol" ]; then
    cat -n "project/contracts/mocks/MockUSDC.sol"
else
    echo "File not found at project/contracts/mocks/MockUSDC.sol"
fi

# Search for references to MockUSDC and MockUSDT in the codebase
echo -e "\n=== Searching for MockUSDC references ==="
rg 'MockUSDC' --type solidity --max-count 20

echo -e "\n=== Searching for MockUSDT references ==="
rg 'MockUSDT' --type solidity --max-count 20

Repository: paycrest/contracts

Length of output: 323


🏁 Script executed:

#!/bin/bash

# Read the actual MockUSDC.sol file from the correct path
echo "=== Content of MockUSDC.sol ==="
find . -name 'MockUSDC.sol' -type f -exec cat -n {} \;

# Search for MockUSDC and MockUSDT references
echo -e "\n=== Searching for MockUSDC references ==="
rg 'MockUSDC' --type sol

echo -e "\n=== Searching for MockUSDT references ==="
rg 'MockUSDT' --type sol

Repository: paycrest/contracts

Length of output: 1086


🏁 Script executed:

#!/bin/bash

# Search for any imports or references to these mock tokens
echo "=== Searching for imports of MockUSDC ==="
rg 'MockUSDC' -i

echo -e "\n=== Searching for imports of MockUSDT ==="
rg 'MockUSDT' -i

echo -e "\n=== Checking for other mock token files ==="
fd -a 'Mock' --type f | grep -E '\.sol$'

Repository: paycrest/contracts

Length of output: 50376


Fix MockUSDC vs MockUSDT naming inconsistencies in the source contract.

The contracts/mocks/MockUSDC.sol file declares contract MockUSDT, initializes the ERC20 with token name "MockUDSC" (typo), and references "MockUSDT" in error messages. The file comment indicates USDC intent. Align the contract name, token name (fix typo to "MockUSDC"), and error messages with the filename and intended token type, then regenerate artifacts.

🤖 Prompt for AI Agents
In
`@artifacts/build-info/solc-0_8_20-32286d087051e35025cad535aaaac947e68287bf.json`
around lines 48 - 49, The contract in the artifact contains inconsistent naming:
rename contract MockUSDT to MockUSDC, fix the ERC20 name in the constructor from
"MockUDSC" to "MockUSDC" (keep or adjust the symbol "MUSDC" as desired), and
update any string literals or require messages that reference "MockUSDT" (e.g.,
in burnAll) to "MockUSDC"; ensure the constructor, mint, burn, and burnAll
functions still compile after renaming and then regenerate the build artifacts.

"content": "// SPDX-License-Identifier: MIT\n// OpenZeppelin Contracts (last updated v4.9.0) (utils/Strings.sol)\n\npragma solidity ^0.8.0;\n\nimport \"./math/Math.sol\";\nimport \"./math/SignedMath.sol\";\n\n/**\n * @dev String operations.\n */\nlibrary Strings {\n bytes16 private constant _SYMBOLS = \"0123456789abcdef\";\n uint8 private constant _ADDRESS_LENGTH = 20;\n\n /**\n * @dev Converts a `uint256` to its ASCII `string` decimal representation.\n */\n function toString(uint256 value) internal pure returns (string memory) {\n unchecked {\n uint256 length = Math.log10(value) + 1;\n string memory buffer = new string(length);\n uint256 ptr;\n /// @solidity memory-safe-assembly\n assembly {\n ptr := add(buffer, add(32, length))\n }\n while (true) {\n ptr--;\n /// @solidity memory-safe-assembly\n assembly {\n mstore8(ptr, byte(mod(value, 10), _SYMBOLS))\n }\n value /= 10;\n if (value == 0) break;\n }\n return buffer;\n }\n }\n\n /**\n * @dev Converts a `int256` to its ASCII `string` decimal representation.\n */\n function toString(int256 value) internal pure returns (string memory) {\n return string(abi.encodePacked(value < 0 ? 
\"-\" : \"\", toString(SignedMath.abs(value))));\n }\n\n /**\n * @dev Converts a `uint256` to its ASCII `string` hexadecimal representation.\n */\n function toHexString(uint256 value) internal pure returns (string memory) {\n unchecked {\n return toHexString(value, Math.log256(value) + 1);\n }\n }\n\n /**\n * @dev Converts a `uint256` to its ASCII `string` hexadecimal representation with fixed length.\n */\n function toHexString(uint256 value, uint256 length) internal pure returns (string memory) {\n bytes memory buffer = new bytes(2 * length + 2);\n buffer[0] = \"0\";\n buffer[1] = \"x\";\n for (uint256 i = 2 * length + 1; i > 1; --i) {\n buffer[i] = _SYMBOLS[value & 0xf];\n value >>= 4;\n }\n require(value == 0, \"Strings: hex length insufficient\");\n return string(buffer);\n }\n\n /**\n * @dev Converts an `address` with fixed length of 20 bytes to its not checksummed ASCII `string` hexadecimal representation.\n */\n function toHexString(address addr) internal pure returns (string memory) {\n return toHexString(uint256(uint160(addr)), _ADDRESS_LENGTH);\n }\n\n /**\n * @dev Returns true if the two strings are equal.\n */\n function equal(string memory a, string memory b) internal pure returns (bool) {\n return keccak256(bytes(a)) == keccak256(bytes(b));\n }\n}\n"
},
"project/contracts/ProviderBatchCallAndSponsor.sol": {
"content": "// SPDX-License-Identifier: MIT\npragma solidity ^0.8.18;\n\nimport \"@openzeppelin/contracts/utils/cryptography/ECDSA.sol\";\n\ninterface IGateway {\n function getAggregator() external view returns (address);\n}\n/**\n * @title ProviderBatchCallAndSponsor\n *\n * When an EOA upgrades via EIP‑7702, it delegates to this implementation.\n * Off‑chain, the account signs a message authorizing a batch of calls. The message is the hash of:\n * keccak256(abi.encodePacked(nonce, calls))\n * The signature must be generated with the EOA’s private key so that, once upgraded, the recovered signer equals the account’s own address (i.e. address(this)).\n *\n * This contract provides just one way to execute a batch:\n * 1. With a signature: Any sponsor can submit the batch if it carries a valid signature.\n *\n * Replay protection is achieved by using a nonce that is included in the signed message.\n */\ncontract ProviderBatchCallAndSponsor {\n using ECDSA for bytes32;\n \n // constant because when it assigned to eao we want to ensure it never changes\n /// @notice The address of the Gateway contract.\n address public constant gatewayAddress = 0x56dA8fCE8FD64CaaE90D80DED55587b282bb4303;\n /// @notice A nonce used for replay protection.\n uint256 public nonce;\n\n /// @notice Represents a single call within a batch.\n struct Call {\n address to;\n uint256 value;\n bytes data;\n }\n\n modifier onlyAggregator() {\n\t\trequire(msg.sender == IGateway(gatewayAddress).getAggregator(), \"OnlyAggregator\");\n\t\t_;\n\t}\n\n /// @notice Emitted for every individual call executed.\n event CallExecuted(address indexed sender, address indexed to, uint256 value, bytes data);\n /// @notice Emitted when a full batch is executed.\n event BatchExecuted(uint256 indexed nonce, Call[] calls);\n\n /**\n * @notice Executes a batch of calls using an off–chain signature.\n * @param calls An array of Call structs containing destination, ETH value, and calldata.\n * @param signature The ECDSA 
signature over the current nonce and the call data.\n *\n * The signature must be produced off–chain by signing:\n * The signing key should be the account’s key (which becomes the smart account’s own identity after upgrade).\n */\n function execute(Call[] calldata calls, bytes calldata signature) external payable onlyAggregator {\n // Compute the digest that the account was expected to sign.\n bytes memory encodedCalls;\n for (uint256 i = 0; i < calls.length; i++) {\n encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);\n }\n bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));\n \n bytes32 ethSignedMessageHash = ECDSA.toEthSignedMessageHash(digest);\n\n // Recover the signer from the provided signature.\n address recovered = ECDSA.recover(ethSignedMessageHash, signature);\n require(recovered == address(this), \"Invalid signature\");\n\n _executeBatch(calls);\n }\n\n /**\n * @notice Executes a batch of calls directly.\n * @dev This contract doesnt authorized self execution.\n * @param calls An array of Call structs containing destination, ETH value, and calldata.\n */\n function execute(Call[] calldata calls) external payable {\n revert(\"Not implemented\"); // we don't expect this to be called directly\n }\n\n /**\n * @dev Internal function that handles batch execution and nonce incrementation.\n * @param calls An array of Call structs.\n */\n function _executeBatch(Call[] calldata calls) internal {\n uint256 currentNonce = nonce;\n nonce++; // Increment nonce to protect against replay attacks\n\n for (uint256 i = 0; i < calls.length; i++) {\n _executeCall(calls[i]);\n }\n\n emit BatchExecuted(currentNonce, calls);\n }\n\n /**\n * @dev Internal function to execute a single call.\n * @param callItem The Call struct containing destination, value, and calldata.\n */\n function _executeCall(Call calldata callItem) internal {\n (bool success,) = callItem.to.call{value: callItem.value}(callItem.data);\n 
require(success, \"Call reverted\");\n emit CallExecuted(msg.sender, callItem.to, callItem.value, callItem.data);\n }\n\n // Allow the contract to receive ETH (e.g. from DEX swaps or other transfers).\n fallback() external payable {}\n receive() external payable {}\n}"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Solidity abi.encodePacked dynamic types collision vulnerability signature

💡 Result:

abi.encodePacked(...) uses Solidity’s non-standard packed ABI encoding: it concatenates values tightly, and for dynamic types (string, bytes, dynamic arrays) it encodes the data in-place without a length field. That makes the byte stream ambiguous as soon as you pack 2+ dynamic values, so different inputs can produce identical bytes, and therefore identical keccak256 hashes/signature digests. This is the “abi.encodePacked collision” vulnerability. [1]

Why this breaks signature security

If your signature scheme hashes packed user-controlled fields, e.g.

bytes32 digest = keccak256(abi.encodePacked(a, b)); // a,b are dynamic

an attacker can find (a, b) and (a', b') such that:

abi.encodePacked(a, b) == abi.encodePacked(a', b')

So a signature intended for (a, b) will also verify for (a', b') (same digest), enabling authorization bypass / replay on different parameters. This is explicitly warned about in the Solidity ABI spec; classic example:
abi.encodePacked("a","bc") == abi.encodePacked("ab","c"). [1]

When collisions happen

  • High risk: 2+ dynamic types in the same abi.encodePacked(...) (e.g., string,string; bytes,bytes; string,bytes; dynamic arrays). [1][2]
  • Also risky: mixing types where boundaries can be ambiguous without padding/length (packed encoding is not self-delimiting). [1]

How to fix / safe patterns

  1. Prefer abi.encode(...) for anything you hash for signatures/authentication, because it includes offsets/lengths for dynamic data (unambiguous). [1]
  2. If you must use packed encoding, ensure at most one dynamic argument, or add explicit domain separators / lengths, e.g. hash each dynamic piece first:
    bytes32 digest = keccak256(abi.encode(
      TYPEHASH,
      keccak256(bytes(a)),
      keccak256(bytes(b)),
      otherStaticFields
    ));
  3. Use EIP-712 typed structured data style encodings (typed, length-safe) rather than ad-hoc encodePacked digests (same core idea as #2). (General best practice; the key point is “typed + unambiguous”.)

Rule of thumb: never sign/verify keccak256(abi.encodePacked(...)) when multiple packed inputs are dynamic or user-controlled. [1][3]

Sources

[1] Solidity ABI spec — “Non-standard Packed Mode” warning about ambiguity/collisions with multiple dynamic types (and the "a","bc" vs "ab","c" example).
[2] MetaMask encodePacked docs noting ambiguity with more than one dynamic type.
[3] Practical vulnerability write-ups illustrating collision scenarios and impact on hashes/signatures.
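The classic `"a","bc"` vs `"ab","c"` ambiguity above can be reproduced without a Solidity toolchain. This sketch uses plain byte concatenation as a stand-in for `abi.encodePacked`, and a hypothetical length-prefixed encoder to show how `abi.encode`-style framing removes the collision:

```typescript
// Packed concatenation of dynamic values is ambiguous: two different input
// pairs produce the same byte stream, so they would hash to the same digest.
function packed(...parts: string[]): string {
  return Buffer.concat(parts.map((p) => Buffer.from(p, "utf8"))).toString("hex");
}

console.log(packed("a", "bc") === packed("ab", "c")); // true — same bytes, same digest

// Length-prefixing each dynamic part (which is what abi.encode effectively
// does) makes the boundaries unambiguous:
function lengthPrefixed(...parts: string[]): string {
  return Buffer.concat(
    parts.flatMap((p) => {
      const buf = Buffer.from(p, "utf8");
      const len = Buffer.alloc(4);
      len.writeUInt32BE(buf.length);
      return [len, buf];
    })
  ).toString("hex");
}

console.log(lengthPrefixed("a", "bc") === lengthPrefixed("ab", "c")); // false
```

The same reasoning applies to the `encodedCalls` loop in `execute()`: each `calls[i].data` is dynamic, so packing several calls back to back lets distinct batches collide on one signed digest.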


🌐 Web query:

Solidity abi.encode vs abi.encodePacked signed messages best practices

💡 Result:

Key difference (why it matters for signatures)

  • abi.encode(...) is standard ABI encoding (32-byte aligned, includes offsets/lengths for dynamic types). It’s unambiguous, so different inputs won’t “accidentally” produce the same byte payload.
  • abi.encodePacked(...) is non-standard packed encoding (tight concatenation; dynamic types are encoded without length), so combining multiple dynamic / variable-length values can be ambiguous and lead to hash collisions (same keccak256 for different inputs). This is especially dangerous for signed messages / authorization. [1][2][3]

Best practices for “signed message” hashes

  1. Prefer EIP-712 for anything security/authorization-related (permits, meta-tx, allowlists, orders, etc.). It gives you:

    • Domain separation (chainId, verifying contract, name/version) to prevent cross-contract / cross-chain replay.
    • A standardized, wallet-supported typed-data flow (eth_signTypedData*). [4][5]
      In Solidity, the struct hash is typically built with keccak256(abi.encode(...)) (not packed). [5]
  2. If you must use “personal_sign” / EIP-191 style signing, hash your structured parameters with:

    • bytes32 messageHash = keccak256(abi.encode(...)); (safe encoding)
    • then apply the EIP-191 prefix ("\x19Ethereum Signed Message:\n32") and hash again.
      Using packed encoding is appropriate for the fixed, known prefix + 32-byte hash case, because there’s no ambiguity there. [1]
  3. Avoid keccak256(abi.encodePacked(...)) when:

    • there are two or more dynamic types (string, bytes, dynamic arrays), or
    • user-controlled variable-length values are concatenated for an auth/signature check. [1][2][3]
  4. If you insist on packed encoding, constrain it so it’s unambiguous:

    • ensure at most one dynamic argument, or
    • hash each dynamic field first (keccak256(bytes(str))) and pack only fixed-size values (e.g., bytes32, uint256, address). [3][5]
  5. Always include replay protection in the signed payload:

    • at minimum: nonce + deadline/expiry
    • and ideally also: chainId + verifyingContract (EIP-712 domain does this for you). [4][5]

Rule of thumb:

  • For signatures/authorization: keccak256(abi.encode(...)) (or EIP-712).
  • Use abi.encodePacked only for carefully bounded cases (fixed-size fields, or prefixing), not for “concatenate arbitrary user inputs and sign”.

Sources: Solidity ABI packed-mode ambiguity and dynamic-type issue [1], SWC-133 guidance for signature contexts [2], security guidance to avoid packed encoding when hashing [3], EIP-712 standard [4], OpenZeppelin EIP712 docs recommending abi.encode + keccak256 for typed data [5].


Replace abi.encodePacked with abi.encode for signed call digest construction.

abi.encodePacked concatenates dynamic bytes without length prefixes. When multiple calls are packed in a loop, different call arrays can produce identical digests, allowing signatures to be replayed across different batches. abi.encode includes length fields and prevents collision.

Safer digest construction
-        bytes memory encodedCalls;
-        for (uint256 i = 0; i < calls.length; i++) {
-            encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);
-        }
-        bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));
+        bytes32 digest = keccak256(abi.encode(nonce, calls));
🤖 Prompt for AI Agents
In
`@artifacts/build-info/solc-0_8_20-fca8cc730a3cef896a4a57d6ce6a72d2a65b5d82.json`
at line 49, The digest construction in execute() currently builds encodedCalls
via a manual loop using abi.encodePacked which can cause collisions; replace
that logic by computing the digest with abi.encode to include length prefixes
(i.e., remove the encodedCalls loop and set bytes32 digest =
keccak256(abi.encode(nonce, calls))); update the execute(Call[] calldata calls,
bytes calldata signature) function to use this new digest computation
(references: execute, encodedCalls, digest) and remove the abi.encodePacked
accumulation to prevent replayable signature collisions.

Comment on lines +25 to +27
// constant because when it assigned to eao we want to ensure it never changes
/// @notice The address of the Gateway contract.
address public constant gatewayAddress = 0x56dA8fCE8FD64CaaE90D80DED55587b282bb4303;

⚠️ Potential issue | 🟡 Minor

Minor typo in comment.

Line 25: "eao" should be "EOA" (Externally Owned Account).

📝 Proposed fix
-    // constant because when it assigned to eao we want to ensure it never changes
+    // constant because when it is assigned to an EOA we want to ensure it never changes
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// constant because when it assigned to eao we want to ensure it never changes
/// @notice The address of the Gateway contract.
address public constant gatewayAddress = 0x56dA8fCE8FD64CaaE90D80DED55587b282bb4303;
// constant because when it is assigned to an EOA we want to ensure it never changes
/// @notice The address of the Gateway contract.
address public constant gatewayAddress = 0x56dA8fCE8FD64CaaE90D80DED55587b282bb4303;
🤖 Prompt for AI Agents
In `@contracts/ProviderBatchCallAndSponsor.sol` around lines 25 - 27, The comment
above the gatewayAddress constant has a typo ("eao"); update the comment to use
the correct acronym "EOA" (Externally Owned Account) so the comment reads e.g.
"constant because when it assigned to EOA we want to ensure it never changes" or
similar; locate the comment immediately preceding the address public constant
gatewayAddress and replace "eao" with "EOA".

Comment on lines +118 to +148
chainDescriptors: {
42220: {
name: "celo",
blockExplorers: {
etherscan: {
name: "celoscan",
url: "https://api.celoscan.io/api",
apiUrl: "https://celoscan.io/",
},
},
{
network: "amoy",
chainId: 80002,
urls: {
apiURL: "https://api-amoy.polygonscan.com/api",
browserURL: "https://amoy.polygonscan.com",
},
534352: {
name: "scroll",
blockExplorers: {
etherscan: {
name: "Scroll Explorer",
url: "https://api.scrollscan.com/api",
apiUrl: "https://scrollscan.com/",
},
},
{
network: "celo",
chainId: 42220,
urls: {
apiURL: "https://api.celoscan.io/api",
browserURL: "https://celoscan.io/",
},
1135: {
name: "lisk",
blockExplorers: {
etherscan: {
name: "Lisk Explorer",
url: "https://explorer.lisk.com/api",
apiUrl: "https://explorer.lisk.com",
},
},
],
},

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Hardhat v3 chainDescriptors etherscan url apiUrl configuration

💡 Result:

In Hardhat v3, the Etherscan verification endpoint configuration for custom (non-natively-supported) chains is done via chainDescriptors.<chainId>.blockExplorers.etherscan, where you set both the human-facing explorer URL and the API base URL:

// hardhat.config.ts
import { defineConfig } from "hardhat/config";
import hardhatVerify from "@nomicfoundation/hardhat-verify";

export default defineConfig({
  plugins: [hardhatVerify],

  verify: {
    etherscan: {
      apiKey: process.env.ETHERSCAN_API_KEY!, // or configVariable(...)
    },
  },

  chainDescriptors: {
    123456: {
      name: "ExampleChain",
      blockExplorers: {
        etherscan: {
          name: "ExampleScan",
          url: "https://example.scan",         // explorer website
          apiUrl: "https://example.scan/api",  // explorer API base used for verification
        },
      },
    },
  },
});
  • url: the block explorer website base URL. (hardhat.org)
  • apiUrl: the block explorer API base URL (the one Hardhat uses to submit verification requests). (hardhat.org)
  • The API key is configured under verify.etherscan.apiKey. (hardhat.org)



🌐 Web query:

Celoscan API endpoint documentation vs web explorer URL

💡 Result:

Celoscan’s web explorer URL and its API base URL are different things:

  • Web explorer (human UI):

    • Celo Mainnet: https://celoscan.io/
    • Celo Sepolia: https://sepolia.celoscan.io/ (listed as official explorers in Celo’s network overview) [3]
  • API endpoint (programmatic): Celoscan’s “Developer API” docs currently point to Etherscan API V2, where you call a single multichain endpoint and specify Celo via chainid. [1]

    • Base: https://api.etherscan.io/v2/api
    • Celo Mainnet uses chainid=42220, Celo Sepolia uses chainid=11142220. [1]
    • Example (block number):
      https://api.etherscan.io/v2/api?chainid=42220&module=proxy&action=eth_blockNumber&apikey=YOUR_KEY [2]

So: don’t use the explorer URL (celoscan.io) as the API base; use the Etherscan V2 API base and set the appropriate Celo chainid. [1][2]

Sources:
[1] Etherscan V2 “Supported Chains” (includes Celo chain IDs) — https://docs.etherscan.io/etherscan-v2/getting-started/supported-chains
[2] Etherscan V2 endpoint example (/v2/api, chainid=...) — https://docs.etherscan.io/api-reference/endpoint/ethblocknumber
[3] Celo docs listing Celoscan as a block explorer — https://docs.celo.org/tooling/overview/network-overview


🌐 Web query:

Hardhat defineConfig blockExplorers etherscan url apiUrl meaning

💡 Result:

In Hardhat’s defineConfig when you add a custom explorer under chainDescriptors -> <chainId> -> blockExplorers -> etherscan, the two fields are:

  • url: the human-facing block explorer website base URL (used for links / UI navigation), e.g. https://example.scan. (hardhat.org)
  • apiUrl: the programmatic API endpoint that the verification plugin calls to submit source code and poll verification status, e.g. https://example.scan/api (or an Etherscan v2 endpoint like https://api.etherscan.io/v2/api). (hardhat.org)

Related: in older @nomicfoundation/hardhat-verify “customChains” config (Hardhat 2-era), the same idea appears as urls.browserURL (website) vs urls.apiURL (API). (docs.linea.build)



Fix swapped url and apiUrl in chainDescriptors for all three chains.

In chainDescriptors, the url and apiUrl fields are reversed across celo, scroll, and lisk configurations. According to Hardhat v3 documentation, url should point to the explorer's website (human-facing) and apiUrl should point to the API endpoint used for contract verification.

Current (incorrect):

  • Celo: url → API endpoint, apiUrl → website
  • Scroll: url → API endpoint, apiUrl → website
  • Lisk: url → API endpoint, apiUrl → website

Swap these values so that url points to the web explorer and apiUrl points to the API endpoint.
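Assuming the Hardhat v3 field semantics described above, a corrected Celo entry would look like the sketch below (the scroll and lisk entries need the same swap); the endpoints themselves are the ones already present in the config:

```typescript
// hardhat.config.ts — corrected entry for chain 42220 (sketch)
chainDescriptors: {
  42220: {
    name: "celo",
    blockExplorers: {
      etherscan: {
        name: "celoscan",
        url: "https://celoscan.io/",            // human-facing explorer website
        apiUrl: "https://api.celoscan.io/api",  // API endpoint used for verification
      },
    },
  },
  // 534352 (scroll) and 1135 (lisk): apply the same url/apiUrl swap
},
```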

🤖 Prompt for AI Agents
In `@hardhat.config.ts` around lines 118 - 148, The chainDescriptors entries for
chains 42220 (celo), 534352 (scroll), and 1135 (lisk) have swapped fields: swap
the values of url and apiUrl in each object's blockExplorers.etherscan so that
url is the human-facing explorer website and apiUrl is the API endpoint used for
verification (update the objects under chainDescriptors for 42220, 534352, and
1135 accordingly).

package.json Outdated
Comment on lines +19 to +37
"type": "module",
"devDependencies": {
"@daochild/tronweb-typescript": "^1.1.2",
"@ethersproject/abi": "^5.7.0",
"@ethersproject/providers": "^5.7.2",
"@nomicfoundation/hardhat-chai-matchers": "^1.0.6",
"@nomicfoundation/hardhat-network-helpers": "^1.0.8",
"@nomicfoundation/hardhat-toolbox": "^2.0.2",
"@nomiclabs/hardhat-ethers": "^2.2.3",
"@nomiclabs/hardhat-etherscan": "^3.1.7",
"@openzeppelin/hardhat-upgrades": "^1.27.0",
"@nomicfoundation/hardhat-ethers": "^4.0.4",
"@nomicfoundation/hardhat-ethers-chai-matchers": "^3.0.2",
"@nomicfoundation/hardhat-mocha": "^3.0.9",
"@nomicfoundation/hardhat-network-helpers": "^3.0.3",
"@nomicfoundation/hardhat-toolbox-mocha-ethers": "^3.0.2",
"@nomicfoundation/hardhat-typechain": "^3.0.1",
"@nomicfoundation/hardhat-verify": "^3.0.8",
"@openzeppelin/truffle-upgrades": "^1.21.0",
"@typechain/ethers-v5": "^10.2.0",
"@typechain/hardhat": "^6.1.5",
"chai": "^4.3.7",
"@types/chai": "^5.2.3",
"@types/mocha": "^10.0.10",
"chai": "^6.2.2",
"dotenv": "^16.0.2",
"ethers": "^5.7.2",
"hardhat": "^2.14.0",
"hardhat-deploy": "^0.11.34",
"hardhat-gas-reporter": "^1.0.9",
"solidity-coverage": "^0.8.2",
"ethers": "^6.16.0",
"hardhat": "^3.0.12",
"mocha": "^11.7.5",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Locate Hardhat config files and check for CommonJS usage
echo "=== Looking for hardhat.config files ==="
fd -a 'hardhat.config.*' 2>/dev/null || echo "No hardhat.config files found"

echo -e "\n=== Checking for require/module.exports in hardhat config ==="
rg -n --glob 'hardhat.config.*' 'require\(|module\.exports' 2>/dev/null || echo "No require/module.exports in hardhat configs"

echo -e "\n=== Checking for require in test files ==="
rg -n --glob 'test/**/*.{js,ts}' 'require\(' 2>/dev/null || echo "No require found in test files"

echo -e "\n=== List all config-like files in root ==="
fd -a '.*config' --type f --max-depth 1 2>/dev/null || true

Repository: paycrest/contracts

Length of output: 1516


🏁 Script executed:

#!/bin/bash
echo "=== All test files in repository ==="
find test -type f \( -name "*.js" -o -name "*.ts" \) 2>/dev/null | sort

echo -e "\n=== Count of require statements in test directory ==="
rg -c 'require\(' test/ 2>/dev/null | grep -v ':0$' || echo "Some files contain require"

echo -e "\n=== Check fixtures directory ==="
rg -n 'require\(' test/fixtures/ 2>/dev/null | head -20

echo -e "\n=== Check utils directory ==="
rg -n 'require\(' test/utils/ 2>/dev/null | head -20

Repository: paycrest/contracts

Length of output: 502


🏁 Script executed:

#!/bin/bash
echo "=== Content of test/fixtures/gateway.js ==="
cat -n test/fixtures/gateway.js 2>/dev/null | head -30

echo -e "\n=== Content of test/fixtures/mockUSDT.js ==="
cat -n test/fixtures/mockUSDT.js 2>/dev/null | head -30

echo -e "\n=== Content of test/utils/utils.manager.js ==="
cat -n test/utils/utils.manager.js 2>/dev/null | head -40

Repository: paycrest/contracts

Length of output: 3687


🏁 Script executed:

#!/bin/bash
echo "=== Content of test/gateway/gateway.ownable.test.js (first 20 lines) ==="
head -20 test/gateway/gateway.ownable.test.js

Repository: paycrest/contracts

Length of output: 594


Convert test files to ESM: 2 test files still use CommonJS require() and will break with "type": "module" enabled.

  • test/gateway/gateway.settleOrder.test.js (6 require statements)
  • test/gateway/gateway.createorder.test.js (6 require statements)

The hardhat config, fixtures, utils, and test/gateway/gateway.ownable.test.js are already ESM-safe. Convert the two listed test files to use import statements instead of require().
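The mechanical part of that conversion can be scripted. The helper below is a hypothetical codemod sketch (not part of this PR) covering the two `require()` shapes these test files use; destructured and default-style requires become named and default imports respectively.

```javascript
// Hypothetical codemod sketch: rewrite simple top-level CommonJS requires
// into ESM imports. Covers only the two common shapes; anything fancier
// should be converted by hand.
function requireToImport(source) {
  return source
    // const { a, b } = require("mod");  ->  import { a, b } from "mod";
    .replace(
      /const\s*\{([^}]+)\}\s*=\s*require\(["']([^"']+)["']\);?/g,
      'import {$1} from "$2";'
    )
    // const x = require("mod");  ->  import x from "mod";
    .replace(
      /const\s+(\w+)\s*=\s*require\(["']([^"']+)["']\);?/g,
      'import $1 from "$2";'
    );
}
```

Remaining CommonJS constructs (`module.exports`, `__dirname`) still need manual conversion.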

🤖 Prompt for AI Agents
In `@package.json` around lines 19 - 37, Two test files
(test/gateway/gateway.settleOrder.test.js and
test/gateway/gateway.createorder.test.js) still use CommonJS require() which
breaks under "type":"module"; change each require(...) to ES module import
statements (use named or default import form matching the original
destructuring), update any module.exports in code they import to export/exports
(or convert exported helpers to export default/named) and adjust references
accordingly, and handle CommonJS globals if present (replace
__dirname/__filename uses with import.meta.url or path utilities). Locate
require calls in those two files and replace them with appropriate import lines
and fix any exported helper modules to use export syntax so tests run under ESM.

Comment on lines +7 to +8
const mockUSDT = await ethers.deployContract("MockUSDT");
console.log("MockUSDT deployed to:", await mockUSDT.getAddress());

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Hardhat v3 ethers.deployContract behavior mining await

💡 Result:

In Hardhat v3 + ethers v6, await ethers.deployContract("Name", ...) gives you an ethers.Contract instance, but it doesn’t guarantee the deployment tx has been mined (i.e., you may still need to wait for the receipt). Hardhat’s ethers plugin documents deployContract as returning Promise<ethers.Contract> without stating it waits for mining, unlike the viem plugin which explicitly says its deployContract waits until the deployment transaction is mined. [1] [2]

So if you need the deployment to be mined (address usable, receipt available, downstream tooling that “detects deploy” based on receipts, etc.), do:

const c = await ethers.deployContract("Counter");
await c.waitForDeployment(); // ethers v6 (ethers v5 was c.deployed())
const addr = await c.getAddress();

This “explicit wait” is also what third-party tooling expects when it says to call waitForDeployment() (ethers v6) to ensure the deployment receipt is available. [3]

If you have automining disabled on Hardhat Network, any waitForDeployment() / tx.wait() can hang forever until you mine a block, because the tx is queued but never included in a block. Fix by manually mining (or re-enabling automine):

await network.provider.send("evm_setAutomine", [false]);

const c = await ethers.deployContract("Counter"); // tx sent/queued
await network.provider.send("evm_mine");          // mine a block
await c.waitForDeployment();                      // now it resolves

[1] [2] [4]

Sources: Hardhat hardhat-ethers docs (deployContract API) [1], Hardhat hardhat-viem docs (deployContract waits for mining) [2], Tenderly Hardhat note about needing waitForDeployment() for ethers deploy helpers [3], explanation of hangs when automine is off and you must evm_mine [4].


🌐 Web query:

ethers deployContract waitForDeployment Hardhat v3

💡 Result:

In Hardhat v3 (with ethers v6), ethers.deployContract() returns a contract object before it’s mined, so you should await waitForDeployment() before reading the address or interacting with it. Hardhat v3’s hardhat-ethers plugin exposes ethers via network.connect(). [1]

import { network } from "hardhat";

const { ethers } = await network.connect();

const counter = await ethers.deployContract("Counter", [42]); // ctor args optional
await counter.waitForDeployment();                            // ethers v6

const address = await counter.getAddress();
console.log("deployed at:", address);

Notes:

  • deployContract(name, constructorArgs?, signer?) is the Hardhat v3 helper for deployments. [1]
  • waitForDeployment() is the ethers v6 way to wait until the deployment tx is mined (ethers v5 used deployed()). [7]

Add waitForDeployment() to ensure the deployment transaction is mined.

In Hardhat v3 with ethers v6, ethers.deployContract() returns the contract instance before the deployment transaction is mined. Without explicitly awaiting waitForDeployment(), calling getAddress() can race and cause test flakiness.

🛠️ Suggested fix
  const mockUSDT = await ethers.deployContract("MockUSDT");
+ await mockUSDT.waitForDeployment();
  console.log("MockUSDT deployed to:", await mockUSDT.getAddress());
🤖 Prompt for AI Agents
In `@test/fixtures/mockUSDT.js` around lines 7 - 8, The deployment result from
ethers.deployContract("MockUSDT") is returned before the transaction is mined;
after obtaining mockUSDT, await mockUSDT.waitForDeployment() before calling
mockUSDT.getAddress() so the deployment transaction is confirmed and avoids race
conditions in tests.

Comment on lines +56 to +74
async function getSupportedInstitutions() {
	const currency = ethers.utils.formatBytes32String("NGN");

	const accessBank = {
		code: ethers.utils.formatBytes32String("ABNGNGLA"),
		name: ethers.utils.formatBytes32String("ACCESS BANK"),
	};

	const diamondBank = {
		code: ethers.utils.formatBytes32String("DBLNNGLA"),
		name: ethers.utils.formatBytes32String("DIAMOND BANK"),
	};

	return {
		currency,
		accessBank,
		diamondBank,
	};
}

⚠️ Potential issue | 🔴 Critical

ethers.utils.formatBytes32String is deprecated in ethers v6.

The ethers.utils namespace was removed in ethers v6. Use ethers.encodeBytes32String instead.

🐛 Proposed fix
 async function getSupportedInstitutions() {
-	const currency = ethers.utils.formatBytes32String("NGN");
+	const currency = ethers.encodeBytes32String("NGN");

 	const accessBank = {
-		code: ethers.utils.formatBytes32String("ABNGNGLA"),
-		name: ethers.utils.formatBytes32String("ACCESS BANK"),
+		code: ethers.encodeBytes32String("ABNGNGLA"),
+		name: ethers.encodeBytes32String("ACCESS BANK"),
 	};

 	const diamondBank = {
-		code: ethers.utils.formatBytes32String("DBLNNGLA"),
-		name: ethers.utils.formatBytes32String("DIAMOND BANK"),
+		code: ethers.encodeBytes32String("DBLNNGLA"),
+		name: ethers.encodeBytes32String("DIAMOND BANK"),
 	};

 	return {
 		currency,
 		accessBank,
 		diamondBank,
 	};
 }
🤖 Prompt for AI Agents
In `@test/utils/utils.manager.js` around lines 56 - 74, The code in
getSupportedInstitutions uses the deprecated ethers.utils.formatBytes32String;
replace those calls with ethers.encodeBytes32String for currency,
accessBank.code/name and diamondBank.code/name to be compatible with ethers v6
(update any imports if needed so ethers is the v6 API). Ensure you update all
occurrences in getSupportedInstitutions to call
ethers.encodeBytes32String("...") for the currency, accessBank.code,
accessBank.name, diamondBank.code, and diamondBank.name.

Comment on lines +76 to +79
async function mockMintDeposit(gateway, account, usdc, amount) {
	await usdc.connect(account).mint(amount);
	await usdc.connect(account).approve(gateway.address, amount);
}

⚠️ Potential issue | 🔴 Critical

gateway.address is deprecated in ethers v6.

In ethers v6, contract instances no longer have a .address property. Use await gateway.getAddress() instead.

🐛 Proposed fix
 async function mockMintDeposit(gateway, account, usdc, amount) {
 	await usdc.connect(account).mint(amount);
-	await usdc.connect(account).approve(gateway.address, amount);
+	await usdc.connect(account).approve(await gateway.getAddress(), amount);
 }
🤖 Prompt for AI Agents
In `@test/utils/utils.manager.js` around lines 76 - 79, The helper mockMintDeposit
uses the deprecated gateway.address (ethers v6); change it to await
gateway.getAddress() and pass that result to usdc.connect(account).approve(...).
Specifically, inside mockMintDeposit ensure you call const gatewayAddr = await
gateway.getAddress() (or inline await gateway.getAddress()) and use gatewayAddr
when calling usdc.connect(account).approve(...), keeping the function async and
preserving the mint call.

Comment on lines +44 to +49
export type MinEthersFactory<C, ARGS> = {
  deploy(...a: ARGS[]): Promise<C>
}

export type GetContractTypeFromFactory<F> = F extends MinEthersFactory<infer C, any> ? C : never
export type GetARGsTypeFromFactory<F> = F extends MinEthersFactory<any, any> ? Parameters<F['deploy']> : never

⚠️ Potential issue | 🟠 Major

Fix MinEthersFactory deploy signature to accept the argument tuple, not an array of tuples.

deploy(...a: ARGS[]) makes the method take multiple tuple arguments (e.g., [string, number][]) instead of the usual argument list, which breaks typing for factory deploy calls. This is a type-level correctness issue.

🔧 Proposed fix
-export type MinEthersFactory<C, ARGS> = {
-  deploy(...a: ARGS[]): Promise<C>
-}
+export type MinEthersFactory<C, ARGS extends any[]> = {
+  deploy(...a: ARGS): Promise<C>
+}
🤖 Prompt for AI Agents
In `@types/ethers-contracts/common.ts` around lines 44 - 49, The deploy signature
on MinEthersFactory is wrong: change deploy(...a: ARGS[]) to accept the argument
tuple rather than an array of tuples (e.g., deploy(...a: ARGS) or
deploy(...args: ARGS)) so the factory deploy takes the normal variadic tuple
parameters; update MinEthersFactory's deploy signature accordingly and keep
GetContractTypeFromFactory and GetARGsTypeFromFactory (which uses
Parameters<F['deploy']>) intact so they resolve the correct tuple types.

- Changed Mocha version from 11.7.5 to 11.3.0 in package.json and package-lock.json.
- Updated hardhat.config.ts to move Mocha timeout settings under the new test configuration structure.
- Adjusted TypeScript configuration to use ES2020 module syntax and bundler resolution.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@hardhat.config.ts`:
- Around line 11-12: The current env value is built only from
dotenv.config().parsed so system/CI environment variables (process.env) are
ignored; update the construction of env to merge parsed and process.env so that
CI/system variables are included and take precedence. Locate the dotenv.config()
call and the env variable declaration (dotEnvResult, dotenv.config(), env) and
change env to combine dotEnvResult.parsed with process.env (with process.env
overriding parsed) and cast to Record<string,string> as needed.

In `@package.json`:
- Line 9: The "ownership" npm script is malformed because it passes "mocha" as
an argument to "npx hardhat test"; update the "ownership" script (symbol:
ownership) to call Hardhat correctly by removing the extra "mocha" token so it
matches other test scripts (e.g., use "npx hardhat test
test/gateway/gateway.ownable.test.js" or the project’s standard test
invocation).

Comment on lines +11 to +12
const dotEnvResult = dotenv.config();
const env = (dotEnvResult.parsed ?? {}) as Record<string, string>;

⚠️ Potential issue | 🟠 Major

Environment variables from system/CI won't be picked up.

dotenv.config().parsed only contains variables loaded from the .env file. System environment variables (e.g., those injected in CI pipelines) won't be included in env, causing builds/deployments to fail in CI where secrets are typically set via system environment, not .env files.

🔧 Proposed fix to use process.env with dotenv loaded
 import dotenv from "dotenv";
-const dotEnvResult = dotenv.config();
-const env = (dotEnvResult.parsed ?? {}) as Record<string, string>;
+dotenv.config();
+const env = process.env as Record<string, string>;
🤖 Prompt for AI Agents
In `@hardhat.config.ts` around lines 11 - 12, The current env value is built only
from dotenv.config().parsed so system/CI environment variables (process.env) are
ignored; update the construction of env to merge parsed and process.env so that
CI/system variables are included and take precedence. Locate the dotenv.config()
call and the env variable declaration (dotEnvResult, dotenv.config(), env) and
change env to combine dotEnvResult.parsed with process.env (with process.env
overriding parsed) and cast to Record<string,string> as needed.

  "scripts": {
    "test": "npx hardhat test",
-   "ownership": "npx hardhat test test/gateway/gateway.ownable.test.js",
+   "ownership": "npx hardhat test mocha test/gateway/gateway.ownable.test.js",

⚠️ Potential issue | 🟡 Minor

Malformed ownership script command.

The command npx hardhat test mocha test/gateway/gateway.ownable.test.js appears incorrect—it passes mocha as an argument to hardhat test. This will likely fail or behave unexpectedly.

🔧 Proposed fix to match other test scripts
-    "ownership": "npx hardhat test mocha test/gateway/gateway.ownable.test.js",
+    "ownership": "npx hardhat test test/gateway/gateway.ownable.test.js",
🤖 Prompt for AI Agents
In `@package.json` at line 9, The "ownership" npm script is malformed because it
passes "mocha" as an argument to "npx hardhat test"; update the "ownership"
script (symbol: ownership) to call Hardhat correctly by removing the extra
"mocha" token so it matches other test scripts (e.g., use "npx hardhat test
test/gateway/gateway.ownable.test.js" or the project’s standard test
invocation).

…ening

- Implemented a new task to print available accounts and their ETH balances.
- Created a task to flatten specified contracts and their dependencies, with license handling.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@hardhat.config.ts`:
- Line 14: The config defines a hardcoded testPrivateKey and falls back to it
when DEPLOYER_PRIVATE_KEY is missing, which is unsafe for production networks;
update the networks configuration (where testPrivateKey is referenced) to stop
using the well-known key by removing the fallback and instead set the accounts
array to be empty when DEPLOYER_PRIVATE_KEY is not present (so deployments fail
fast), ensuring this change is applied for all production networks listed
(mainnet, arbitrum, polygon, base, bsc, optimism, scroll, celo, assetchain,
lisk) and remove or deprecate the testPrivateKey constant usage.

In `@tasks/accounts.ts`:
- Around line 15-21: The console.log call is printing raw wei with the wrong
`%i` specifier; change the second argument to a string-formatted ETH value and
use `%s` in the format string: call hre.ethers.formatEther on the result of
provider.getBalance(account.address) and pass that formatted string to
console.log (update the format string from "%s (%i ETH)" to use "%s (%s ETH)" or
similar), keeping references to account.address, provider.getBalance,
hre.ethers.formatEther and the existing console.log invocation.

In `@tasks/flatten.ts`:
- Around line 23-26: The code writes to outputPath (const outputPath =
path.join("flattened", "Gateway.sol")) with fs.writeFileSync but doesn't ensure
the "flattened" directory exists; before calling fs.writeFileSync(outputPath,
flattened) create the directory for path.dirname(outputPath) (e.g., using
fs.mkdirSync(dir, { recursive: true }) or fs.promises.mkdir(dir, { recursive:
true })) so the write won't throw ENOENT; update the block around outputPath,
path.join, and fs.writeFileSync accordingly.
🧹 Nitpick comments (4)
tasks/flatten.ts (3)

13-14: Output filename is hardcoded regardless of input files.

The task accepts arbitrary files via arguments, but the output is always written to flattened/Gateway.sol. If a user passes different contracts (e.g., ["contracts/Token.sol"]), the output filename would be misleading.

Consider deriving the output name from the input files, or documenting that this task is Gateway-specific.

Also applies to: 24-24


16-17: Comment doesn't match behavior.

The comment says "Remove every line" but the code replaces SPDX-License-Identifier: with License-Identifier: — the lines remain, just renamed. Consider updating the comment to reflect the actual behavior (e.g., "Rename SPDX-License-Identifier to avoid duplicate license warnings").
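If the goal is one machine-readable license line per flattened file, a variant that keeps the first identifier intact and renames only the later ones would match the comment's original wording. A hedged sketch (the helper name is hypothetical, not from the PR):

```javascript
// Hypothetical variant: keep the first SPDX identifier and rename
// subsequent ones, so the flattened output carries exactly one
// machine-readable license line.
function dedupeSpdx(flattened) {
  let seen = false;
  return flattened.replace(/SPDX-License-Identifier:/g, (match) => {
    if (!seen) {
      seen = true;
      return match; // first occurrence survives unchanged
    }
    return "License-Identifier:"; // later ones are neutralized
  });
}
```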


20-21: Consider a more readable approach for keeping only the first pragma.

The IIFE closure pattern is clever but obscure. A clearer alternative improves maintainability.

♻️ More readable alternative
   // Remove every line started with "pragma experimental ABIEncoderV2;" except the first one
-  flattened = flattened.replace(/pragma experimental ABIEncoderV2;\n/gm, ((i) => (m: any) => (!i++ ? m : ""))(0));
+  let pragmaFound = false;
+  flattened = flattened.replace(/pragma experimental ABIEncoderV2;\n/gm, (match) => {
+    if (!pragmaFound) {
+      pragmaFound = true;
+      return match;
+    }
+    return "";
+  });
tasks/accounts.ts (1)

3-5: Replace the empty interface with a type alias.

Line 3–5 triggers Biome's noEmptyInterface rule (enabled via "recommended": true in your config); a type is clearer and lint-clean. The interface is only used once as a function parameter type and has no usages elsewhere.

Suggested change
-interface AccountTaskArguments {
-  // No argument in this case
-}
+type AccountTaskArguments = Record<string, never>;

const env = (dotEnvResult.parsed ?? {}) as Record<string, string>;

dotenv.config();
const testPrivateKey = "0000000000000000000000000000000000000000000000000000000000000001"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -name "hardhat.config.ts" -o -name "hardhat.config.js" | head -20

Repository: paycrest/contracts

Length of output: 81


🏁 Script executed:

cat -n ./hardhat.config.ts

Repository: paycrest/contracts

Length of output: 5707


🌐 Web query:

Hardhat empty accounts array configuration behavior

💡 Result:

In Hardhat, the networks.<name>.accounts field is not “optional if empty”—it directly determines what signers Hardhat can use:

  • For JSON-RPC (“http”) networks, accounts can be:
    • "remote" (use the node’s unlocked accounts) — this is also the default if you omit accounts. [1]
    • an array of private keys / config variables (use those as local signers). [1]
    • an HD wallet config object (derive accounts). [1]

If you set accounts: [], you are explicitly configuring Hardhat to use zero local accounts. That means Hardhat will typically have no available signer, so anything that needs a default account/wallet client (deployments, sending txs, etc.) can fail with “no account / no wallet client” style errors (e.g., Hardhat 3’s “Default Wallet Client Not Found … if no accounts were configured for the selected network”). [2]

Practical takeaway:

  • If you intended “use the node’s accounts”, omit accounts or set accounts: "remote". [1]
  • If you intended “use my key(s)”, provide at least one key/config variable in the array (not an empty array). [1]

Sources:
[1] Hardhat configuration reference (accounts default and accepted shapes). (hardhat.org)
[2] Hardhat errors reference (missing accounts ⇒ missing default wallet client). (hardhat.org)


🌐 Web query:

private key 0x0000000000000000000000000000000000000000000000000000000000000001 well-known test key

💡 Result:

0x000…0001 (i.e., the integer private key 1) is a “well‑known” key because on secp256k1 the public key is computed as P = k·G; with k = 1, the public key is just the generator point G itself. The generator point (in compressed SEC form) is 0279be66…f81798. [1]

Because it’s a real (and trivially guessable) key, people commonly reference it in examples and “puzzle”/demo contexts; it corresponds to widely cited, fixed addresses, e.g.:

  • Bitcoin (P2PKH, compressed pubkey) address: 1BgGZ9tcN4rm9KBzDn7KprQz87SZ26SAMH for private key …0001. [2]
  • Ethereum address: 0x7E5F4552091A69125d5DfCb7b8C2659029395Bdf for the same private key …0001. [3]

It is not safe for testing with real funds on any public chain (bots sweep deposits immediately); use dedicated dev chains or generated test keys instead. [2]


Remove the fallback to a well-known test key on production networks.

The config uses a well-known private key (0x000…0001 → Ethereum address 0x7E5F4552091A69125d5DfCb7b8C2659029395Bdf) as a fallback when DEPLOYER_PRIVATE_KEY is missing. This key is publicly documented in tutorials and actively monitored by bots; any funds sent to it are immediately swept. Using it for mainnet, arbitrum, polygon, base, bsc, optimism, scroll, celo, assetchain, or lisk risks accidental deployment with a completely compromised signer.

Replace with an empty accounts array to fail fast if the env var is missing:

🔐 Suggested fix (apply across all networks)
-const testPrivateKey = "0000000000000000000000000000000000000000000000000000000000000001"
+const deployerPrivateKey = env.DEPLOYER_PRIVATE_KEY;
+const accounts = deployerPrivateKey ? [deployerPrivateKey] : [];
-accounts: [env.DEPLOYER_PRIVATE_KEY || testPrivateKey],
+accounts,
🤖 Prompt for AI Agents
In `@hardhat.config.ts` at line 14, The config defines a hardcoded testPrivateKey
and falls back to it when DEPLOYER_PRIVATE_KEY is missing, which is unsafe for
production networks; update the networks configuration (where testPrivateKey is
referenced) to stop using the well-known key by removing the fallback and
instead set the accounts array to be empty when DEPLOYER_PRIVATE_KEY is not
present (so deployments fail fast), ensuring this change is applied for all
production networks listed (mainnet, arbitrum, polygon, base, bsc, optimism,
scroll, celo, assetchain, lisk) and remove or deprecate the testPrivateKey
constant usage.

Comment on lines +15 to +21
console.log(
	"%s (%i ETH)",
	account.address,
	// hre.ethers.utils.formatEther(
		// getBalance returns wei amount, format to ETH amount
		await provider.getBalance(account.address)
	// )

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find ethers version in package manifests
rg -n "ethers" --type json -g 'package.json' -g 'package-lock.json' -g 'yarn.lock' -g 'pnpm-lock.yaml' | head -20

Repository: paycrest/contracts

Length of output: 1321


🏁 Script executed:

# Also check if package.json exists and read it
if [ -f "package.json" ]; then
  cat package.json | grep -A 5 '"ethers"'
fi

Repository: paycrest/contracts

Length of output: 216


🏁 Script executed:

# Read the actual code in tasks/accounts.ts
if [ -f "tasks/accounts.ts" ]; then
  cat -n tasks/accounts.ts | head -30
fi

Repository: paycrest/contracts

Length of output: 865


Format balances to ETH (not raw wei) and use %s format specifier.

Line 16 uses %i format with a wei value, which will format incorrectly. For ethers v6, extract the balance, format it using hre.ethers.formatEther(), and use %s instead:

Suggested change
-      console.log(
-          "%s (%i ETH)",
-          account.address,
-          // hre.ethers.utils.formatEther(
-              // getBalance returns wei amount, format to ETH amount
-              await provider.getBalance(account.address)
-          // )
-      );
+      const balanceWei = await provider.getBalance(account.address);
+      const balanceEth = hre.ethers.formatEther(balanceWei);
+      console.log("%s (%s ETH)", account.address, balanceEth);
🤖 Prompt for AI Agents
In `@tasks/accounts.ts` around lines 15 - 21, The console.log call is printing raw
wei with the wrong `%i` specifier; change the second argument to a
string-formatted ETH value and use `%s` in the format string: call
hre.ethers.formatEther on the result of provider.getBalance(account.address) and
pass that formatted string to console.log (update the format string from "%s (%i
ETH)" to use "%s (%s ETH)" or similar), keeping references to account.address,
provider.getBalance, hre.ethers.formatEther and the existing console.log
invocation.
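Without pulling in ethers, the wei-to-ETH conversion that hre.ethers.formatEther performs can be sketched in isolation. The helper name and its trailing-zero handling below are illustrative, not the library implementation:

```typescript
// Illustrative sketch of wei -> ETH string formatting, mirroring the
// "1.5"-style output ethers v6 formatEther produces. This helper is an
// assumption for demonstration, not the library code.
function formatEtherSketch(wei: bigint): string {
  const base = 10n ** 18n; // 1 ETH = 10^18 wei
  const whole = wei / base;
  // Pad the remainder to 18 digits, then trim trailing zeros.
  const frac = (wei % base).toString().padStart(18, "0").replace(/0+$/, "");
  return frac.length > 0 ? `${whole}.${frac}` : `${whole}.0`;
}

console.log(formatEtherSketch(1_500_000_000_000_000_000n)); // "1.5"
console.log(formatEtherSketch(10n ** 18n)); // "1.0"
```

Because the result is a string, %s is the only specifier that prints it faithfully; %i would coerce it through an integer conversion and drop the fractional part.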

Comment on lines +23 to +26
// Write to flattened/Gateway.sol
const outputPath = path.join("flattened", "Gateway.sol");
fs.writeFileSync(outputPath, flattened);
console.log(`Flattened contract written to ${outputPath}`);

⚠️ Potential issue | 🟠 Major

Ensure the output directory exists before writing.

fs.writeFileSync will throw ENOENT if the flattened directory doesn't exist. Consider creating it first.

🐛 Proposed fix
   // Write to flattened/Gateway.sol
   const outputPath = path.join("flattened", "Gateway.sol");
+  fs.mkdirSync(path.dirname(outputPath), { recursive: true });
   fs.writeFileSync(outputPath, flattened);
   console.log(`Flattened contract written to ${outputPath}`);
🤖 Prompt for AI Agents
In `@tasks/flatten.ts` around lines 23 - 26, The code writes to outputPath (const
outputPath = path.join("flattened", "Gateway.sol")) with fs.writeFileSync but
doesn't ensure the "flattened" directory exists; before calling
fs.writeFileSync(outputPath, flattened) create the directory for
path.dirname(outputPath) (e.g., using fs.mkdirSync(dir, { recursive: true }) or
fs.promises.mkdir(dir, { recursive: true })) so the write won't throw ENOENT;
update the block around outputPath, path.join, and fs.writeFileSync accordingly.
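A minimal, self-contained sketch of the write-with-directory-creation pattern the fix describes. The paths here point at a temp directory rather than the repo's flattened/ folder, so the snippet runs anywhere:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Create the parent directory before writing, so writeFileSync cannot
// fail with ENOENT on a fresh checkout that lacks the output folder.
const outDir = fs.mkdtempSync(path.join(os.tmpdir(), "flatten-demo-"));
const outputPath = path.join(outDir, "flattened", "Gateway.sol");

fs.mkdirSync(path.dirname(outputPath), { recursive: true });
fs.writeFileSync(outputPath, "// flattened source placeholder\n");

console.log(fs.existsSync(outputPath)); // true
```

With { recursive: true }, mkdirSync is also a no-op when the directory already exists, so the guard is safe to run on every invocation.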

- Changed RPC URLs for various networks in hardhat.config.ts to new endpoints.
- Added chainId for Celo network in the configuration.
- Updated package dependencies, including upgrading Hardhat to version 3.1.9 and adjusting related packages.
- Added license information for several packages in package-lock.json and yarn.lock.
- Removed autogenerated artifacts from the artifacts directory to clean up the project.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (5)
hardhat.config.ts (3)

11-12: ⚠️ Potential issue | 🟠 Major

Merge .env values with process.env to avoid CI secret loss.

Line 12 builds env from dotenv.config().parsed only, so CI/system-injected variables are ignored.

🔧 Suggested fix
 import dotenv from "dotenv";
 const dotEnvResult = dotenv.config();
-const env = (dotEnvResult.parsed ?? {}) as Record<string, string>;
+const env = {
+  ...(dotEnvResult.parsed ?? {}),
+  ...process.env,
+} as Record<string, string>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hardhat.config.ts` around lines 11 - 12, The current code sets env from
dotenv.config().parsed only (dotEnvResult and env), which drops
CI/system-injected variables; change the merge so env combines dotenv.parsed
with existing process.env and preserves CI values by merging parsed first then
spreading process.env over it (i.e., make env a merged object of
dotEnvResult.parsed and process.env) and keep the same Record<string,string>
typing so downstream code still works.
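The merge order the fix relies on can be demonstrated in isolation. The sample values below are made up; parsed stands in for dotenv.config().parsed and ciEnv for process.env:

```typescript
// Merge .env-parsed values with process.env so CI-injected secrets win.
// Spreading ciEnv last means its keys override any duplicates from .env.
const parsed: Record<string, string> = {
  RPC_URL: "http://localhost:8545",
  API_KEY: "from-dotenv",
};
const ciEnv: Record<string, string | undefined> = { API_KEY: "from-ci" };

const env = { ...parsed, ...ciEnv } as Record<string, string>;

console.log(env.API_KEY); // "from-ci": process.env takes precedence
console.log(env.RPC_URL); // "http://localhost:8545": .env fills the gaps
```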

146-147: ⚠️ Potential issue | 🟠 Major

Swap url and apiUrl for Scroll and Lisk descriptors.

In Hardhat chain descriptors, url should be the explorer website and apiUrl the verification API endpoint; these are reversed on Lines 146-147 and 156-157.

🧩 Suggested fix
          name: "scroll",
          blockExplorers: {
            etherscan: {
              name: "Scroll Explorer",
-             url: "https://api.scrollscan.com/api",
-             apiUrl: "https://scrollscan.com/",
+             url: "https://scrollscan.com/",
+             apiUrl: "https://api.scrollscan.com/api",
            },
          },
        },
          name: "lisk",
          blockExplorers: {
            etherscan: {
              name: "Lisk Explorer",
-             url: "https://explorer.lisk.com/api",
-             apiUrl: "https://explorer.lisk.com",
+             url: "https://explorer.lisk.com",
+             apiUrl: "https://explorer.lisk.com/api",
            },
          },
        },

Also applies to: 156-157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hardhat.config.ts` around lines 146 - 147, The Scroll and Lisk chain
descriptors have their explorer and API endpoints swapped: in each descriptor
swap the values assigned to the properties url and apiUrl so that url points to
the explorer website (e.g., "https://scrollscan.com/") and apiUrl points to the
verification API endpoint (e.g., "https://api.scrollscan.com/api"); update both
occurrences (the entries that define url and apiUrl in the Scroll descriptor and
the similar url/apiUrl pair in the Lisk descriptor) so the property names
correctly match their intended endpoints.
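For reference, the corrected Scroll descriptor shape, with the two endpoints on their intended fields. The object layout follows the chain-descriptor convention the comment cites, and the URLs are those given in the suggested fix:

```typescript
// url = human-facing explorer website; apiUrl = verification API endpoint.
const scrollDescriptor = {
  name: "scroll",
  blockExplorers: {
    etherscan: {
      name: "Scroll Explorer",
      url: "https://scrollscan.com/",
      apiUrl: "https://api.scrollscan.com/api",
    },
  },
};

console.log(scrollDescriptor.blockExplorers.etherscan.apiUrl);
```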

14-14: ⚠️ Potential issue | 🔴 Critical

Remove the well-known private-key fallback from deploy accounts.

Line 14 hard-codes a publicly known private key, and Lines 33/38/43/48/53/58/63/69/74/79/86 use it as a fallback, which risks compromised deployments.

🔐 Suggested fix
-const testPrivateKey = "0000000000000000000000000000000000000000000000000000000000000001"
+const deployerPrivateKey = env.DEPLOYER_PRIVATE_KEY;
+const accounts = deployerPrivateKey ? [deployerPrivateKey] : [];
-accounts: [env.DEPLOYER_PRIVATE_KEY || testPrivateKey],
+accounts,

Apply the accounts replacement consistently to all configured networks.

Also applies to: 33-33, 38-38, 43-43, 48-48, 53-53, 58-58, 63-63, 69-69, 74-74, 79-79, 86-86

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hardhat.config.ts` at line 14, Remove the hard-coded testPrivateKey constant
and replace any fallback usage with a secure environment-based accounts value:
change occurrences that reference testPrivateKey in the Hardhat network configs
(the networks.*.accounts fields) to use a process.env variable (e.g.,
process.env.DEPLOYER_PRIVATE_KEY) and default to an empty array when not set
(e.g., process.env.DEPLOYER_PRIVATE_KEY ? [process.env.DEPLOYER_PRIVATE_KEY] :
[]). Ensure this replacement is applied consistently for every network that
currently falls back to testPrivateKey so no network config uses the well-known
private key.
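The fallback-free accounts pattern can be sketched as a small helper; the function name is illustrative, while the DEPLOYER_PRIVATE_KEY variable matches the fix above:

```typescript
// Build the Hardhat accounts array from the environment only: a missing
// key yields an empty accounts list, never a well-known test key.
function accountsFrom(env: Record<string, string | undefined>): string[] {
  const key = env.DEPLOYER_PRIVATE_KEY;
  return key ? [key] : [];
}

console.log(accountsFrom({})); // []
console.log(accountsFrom({ DEPLOYER_PRIVATE_KEY: "0xabc123" })); // ["0xabc123"]
```

An empty accounts array makes deployments to that network fail fast with a clear signing error instead of silently using a compromised key.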
artifacts/build-info/solc-0_8_20-dd5ad135e14cccdeeb5d3ebbf89923097f46a8aa.json (1)

48-49: ⚠️ Potential issue | 🟡 Minor

The generated mock metadata is still internally inconsistent.

Lines 48-49 embed a source file named MockUSDC.sol that declares MockUSDT, uses "MockUDSC" in the constructor, and still reverts with MockUSDT in burnAll(). Pick one identity for this mock and make the filename, contract name, token name, and regenerated artifacts match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@artifacts/build-info/solc-0_8_20-dd5ad135e14cccdeeb5d3ebbf89923097f46a8aa.json`
around lines 48 - 49, The mock metadata is inconsistent: the file is named
MockUSDC.sol while the contract is declared as MockUSDT, the constructor uses
the token name "MockUDSC" and symbol "MUSDC", and burnAll reverts with
"MockUSDT: Nothing to burn"; pick a single identity (e.g., MockUSDC or MockUSDT)
and make them consistent by renaming the contract declaration to match the
filename (or rename the file to match the contract), update the ERC20
constructor parameters (token name and symbol) to the chosen identity, update
the require/revert string in burnAll to the same contract name, and then
regenerate the build artifacts so the JSON (solc-0_8_20-...) reflects the
corrected filename, contract name, token name/symbol, and revert message.
contracts/ProviderBatchCallAndSponsor.sol (1)

47-52: ⚠️ Potential issue | 🔴 Critical

Harden digest construction: current hash is collision-prone and replayable across chains.

Line 49 uses incremental abi.encodePacked with dynamic bytes, which can produce ambiguous packed payloads. Line 51 also omits domain separation (chainId, contract/account context), so a valid signature can be replayed on another chain for the same account address/nonce.

🔒 Proposed fix
-        bytes memory encodedCalls;
-        for (uint256 i = 0; i < calls.length; i++) {
-            encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);
-        }
-        bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));
+        bytes32 digest = keccak256(
+            abi.encode(
+                block.chainid,
+                address(this),
+                nonce,
+                calls
+            )
+        );
#!/bin/bash
# Read-only verification of the current hashing pattern.
rg -n "encodedCalls|abi\\.encodePacked\\(nonce|toEthSignedMessageHash\\(" contracts/ProviderBatchCallAndSponsor.sol
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@contracts/ProviderBatchCallAndSponsor.sol` around lines 47 - 52, The digest
construction is collision-prone and replayable: stop using incremental
abi.encodePacked on dynamic bytes (encodedCalls) and add domain separation
(chainId/contract) or EIP‑712; replace the loop that builds encodedCalls and the
keccak256(abi.encodePacked(nonce, encodedCalls)) with a deterministic encoding
(e.g., use abi.encode for each call with explicit fields and lengths or build a
bytes array of fixed-packed records) and compute the digest over
abi.encode(nonce, chainId, address(this), callsLength, /* deterministic calls
encoding */) or implement an EIP‑712 domain separator; update references to
encodedCalls, calls, nonce, and digest accordingly so signatures cannot collide
or be replayed across chains.
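The ambiguity the comment describes is easy to demonstrate outside Solidity: packed concatenation of dynamic byte strings erases field boundaries, so two different inputs can pack to identical bytes. Buffers stand in for abi.encodePacked output here:

```typescript
import { Buffer } from "node:buffer";

// Two distinct (field1, field2) pairs pack to the same byte string
// because packed encoding keeps no length prefixes for dynamic fields.
const packA = Buffer.concat([Buffer.from("ab"), Buffer.from("c")]);
const packB = Buffer.concat([Buffer.from("a"), Buffer.from("bc")]);

console.log(packA.equals(packB)); // true: same digest preimage, different inputs
// abi.encode, by contrast, includes offsets and lengths, so the two
// encodings (and therefore their keccak256 digests) would differ.
```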
🧹 Nitpick comments (1)
contracts/ProviderBatchCallAndSponsor.sol (1)

45-84: Add automated tests for signature safety and replay guarantees.

Given the security-critical flow, please add tests for digest integrity, nonce replay rejection, and cross-chain/domain separation behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@contracts/ProviderBatchCallAndSponsor.sol` around lines 45 - 84, Add
automated unit tests for signature safety and replay protection focusing on the
execute and _executeBatch flow: (1) a digest integrity test that constructs
encodedCalls exactly as the contract does, computes digest =
keccak256(abi.encodePacked(nonce, encodedCalls)), creates an Ethereum-signed
message and ensures only a signature over that digest (using
ECDSA.toEthSignedMessageHash) is accepted by execute; (2) a nonce replay test
that calls execute successfully once and then reuses the same signature to
assert the second call reverts (nonce increment in _executeBatch must prevent
replay); and (3) a domain/cross-chain separation test that verifies signatures
tied to a different chainId/contract address or altered encodedCalls are
rejected (i.e., modifying chain/domain-separated inputs must change the digest
and fail recovery). Use the execute(Call[] calldata, bytes) and
_executeBatch(Call[] calldata) entrypoints and assert revert messages where
appropriate.
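The replay-rejection behavior those tests target can be modeled in a few lines; the class and method names below are illustrative, not the contract's ABI:

```typescript
// Minimal model of nonce-gated execution: each signed nonce is usable once.
class NonceGate {
  private nonce = 0;

  execute(signedNonce: number): void {
    if (signedNonce !== this.nonce) {
      throw new Error("invalid nonce"); // stale or future nonce: replay rejected
    }
    this.nonce += 1; // consume the nonce before the batch runs
  }
}

const gate = new NonceGate();
gate.execute(0); // first use succeeds
let replayed = false;
try {
  gate.execute(0); // same signature/nonce presented again
} catch {
  replayed = true;
}
console.log(replayed); // true: the second use is rejected
```

A test suite would assert exactly this shape on-chain: one successful execute per nonce, a revert on reuse, and a revert when the digest was signed for a different chainId or contract address.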
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@artifacts/build-info/solc-0_8_20-5eff7bec14d350b270e003042174688bc859428c.json`:
- Line 52: Both settingManagerBool and updateProtocolAddress currently silently
no-op when passed an unknown bytes32 `what`; change them to revert instead. In
settingManagerBool (function settingManagerBool) add an else branch that reverts
with a clear message (e.g., 'Gateway: unknown setting') when `what` is not
'token'; in updateProtocolAddress (function updateProtocolAddress) add an else
that reverts (e.g., 'Gateway: unknown address type') when `what` is neither
'treasury' nor 'aggregator'; keep existing require checks and events unchanged.
- Line 52: Mark the GatewaySettingManager contract as abstract (declare
"abstract contract GatewaySettingManager") to prevent direct deployment without
initialization (MAX_BPS and Ownable2Step not set), and add explicit reverts in
settingManagerBool and updateProtocolAddress for unsupported `what` values
(e.g., revert with "Gateway: invalid setting" or "Gateway: invalid protocol
key") so invalid keys don't silently no-op; locate these changes in the
GatewaySettingManager contract around the settingManagerBool and
updateProtocolAddress functions and ensure onlyOwner and MAX_BPS storage names
are unchanged.

In
`@ignition/deployments/chain-137/artifacts/ProviderBatchCallAndSponsor`#ProviderBatchCallAndSponsor.json:
- Around line 160-161: The signature digest currently built in
ProviderBatchCallAndSponsor (the keccak256(abi.encodePacked(nonce,
encodedCalls)) call that constructs the batch/signature digest) is missing a
chain-specific domain; update the digest computation to include block.chainid
(e.g. keccak256(abi.encodePacked(block.chainid, nonce, encodedCalls))) or
replace the ad-hoc digest with an EIP-712 domain-separated hash, so the digest
function (the method that returns the batch hash / verifies signatures) is bound
to a specific chain and prevents cross-chain replay.

In
`@ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json`:
- Around line 48-49: The signed digest construction in
ProviderBatchCallAndSponsor::execute is unsafe: it uses abi.encodePacked(nonce,
encodedCalls) which is ambiguous for dynamic bytes and is chain-agnostic;
replace the digest creation to use abi.encode(...) and include chain and
contract context (e.g. encode(nonce, calls payload) via abi.encode, plus
block.chainid and address(this)) so the hash is unambiguous and chain-bound,
update the variable names digest/ethSignedMessageHash generation in execute to
use that new encoding, and after making this change regenerate all deployment
artifacts and ABI/bytecode outputs.

---

Duplicate comments:
In
`@artifacts/build-info/solc-0_8_20-dd5ad135e14cccdeeb5d3ebbf89923097f46a8aa.json`:
- Around line 48-49: The mock metadata is inconsistent: the file is named
MockUSDC.sol while the contract is declared as MockUSDT, the constructor uses
the token name "MockUDSC" and symbol "MUSDC", and burnAll reverts with
"MockUSDT: Nothing to burn"; pick a single identity (e.g., MockUSDC or MockUSDT)
and make them consistent by renaming the contract declaration to match the
filename (or rename the file to match the contract), update the ERC20
constructor parameters (token name and symbol) to the chosen identity, update
the require/revert string in burnAll to the same contract name, and then
regenerate the build artifacts so the JSON (solc-0_8_20-...) reflects the
corrected filename, contract name, token name/symbol, and revert message.

In `@contracts/ProviderBatchCallAndSponsor.sol`:
- Around line 47-52: The digest construction is collision-prone and replayable:
stop using incremental abi.encodePacked on dynamic bytes (encodedCalls) and add
domain separation (chainId/contract) or EIP‑712; replace the loop that builds
encodedCalls and the keccak256(abi.encodePacked(nonce, encodedCalls)) with a
deterministic encoding (e.g., use abi.encode for each call with explicit fields
and lengths or build a bytes array of fixed-packed records) and compute the
digest over abi.encode(nonce, chainId, address(this), callsLength, /*
deterministic calls encoding */) or implement an EIP‑712 domain separator;
update references to encodedCalls, calls, nonce, and digest accordingly so
signatures cannot collide or be replayed across chains.

In `@hardhat.config.ts`:
- Around line 11-12: The current code sets env from dotenv.config().parsed only
(dotEnvResult and env), which drops CI/system-injected variables; change the
merge so env combines dotenv.parsed with existing process.env and preserves CI
values by merging parsed first then spreading process.env over it (i.e., make
env a merged object of dotEnvResult.parsed and process.env) and keep the same
Record<string,string> typing so downstream code still works.
- Around line 146-147: The Scroll and Lisk chain descriptors have their explorer
and API endpoints swapped: in each descriptor swap the values assigned to the
properties url and apiUrl so that url points to the explorer website (e.g.,
"https://scrollscan.com/") and apiUrl points to the verification API endpoint
(e.g., "https://api.scrollscan.com/api"); update both occurrences (the entries
that define url and apiUrl in the Scroll descriptor and the similar url/apiUrl
pair in the Lisk descriptor) so the property names correctly match their
intended endpoints.
- Line 14: Remove the hard-coded testPrivateKey constant and replace any
fallback usage with a secure environment-based accounts value: change
occurrences that reference testPrivateKey in the Hardhat network configs (the
networks.*.accounts fields) to use a process.env variable (e.g.,
process.env.DEPLOYER_PRIVATE_KEY) and default to an empty array when not set
(e.g., process.env.DEPLOYER_PRIVATE_KEY ? [process.env.DEPLOYER_PRIVATE_KEY] :
[]). Ensure this replacement is applied consistently for every network that
currently falls back to testPrivateKey so no network config uses the well-known
private key.

---

Nitpick comments:
In `@contracts/ProviderBatchCallAndSponsor.sol`:
- Around line 45-84: Add automated unit tests for signature safety and replay
protection focusing on the execute and _executeBatch flow: (1) a digest
integrity test that constructs encodedCalls exactly as the contract does,
computes digest = keccak256(abi.encodePacked(nonce, encodedCalls)), creates an
Ethereum-signed message and ensures only a signature over that digest (using
ECDSA.toEthSignedMessageHash) is accepted by execute; (2) a nonce replay test
that calls execute successfully once and then reuses the same signature to
assert the second call reverts (nonce increment in _executeBatch must prevent
replay); and (3) a domain/cross-chain separation test that verifies signatures
tied to a different chainId/contract address or altered encodedCalls are
rejected (i.e., modifying chain/domain-separated inputs must change the digest
and fail recovery). Use the execute(Call[] calldata, bytes) and
_executeBatch(Call[] calldata) entrypoints and assert revert messages where
appropriate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a78ebc13-ea63-45b6-9c4b-fc01668f7145

📥 Commits

Reviewing files that changed from the base of the PR and between af5d499 and eeaf0d0.

⛔ Files ignored due to path filters (3)
  • .DS_Store is excluded by !**/.DS_Store
  • package-lock.json is excluded by !**/package-lock.json
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (72)
  • artifacts/@openzeppelin/contracts/proxy/transparent/ProxyAdmin.sol/ProxyAdmin.json
  • artifacts/@openzeppelin/contracts/proxy/transparent/ProxyAdmin.sol/artifacts.d.ts
  • artifacts/@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol/ITransparentUpgradeableProxy.json
  • artifacts/@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol/TransparentUpgradeableProxy.json
  • artifacts/@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol/artifacts.d.ts
  • artifacts/artifacts.d.ts
  • artifacts/build-info/solc-0_8_20-3e5625247acd926fb100807651351d2edfb86e57.json
  • artifacts/build-info/solc-0_8_20-3e5625247acd926fb100807651351d2edfb86e57.output.json
  • artifacts/build-info/solc-0_8_20-5eff7bec14d350b270e003042174688bc859428c.json
  • artifacts/build-info/solc-0_8_20-5eff7bec14d350b270e003042174688bc859428c.output.json
  • artifacts/build-info/solc-0_8_20-8a0c96454463046c2bb073cb3dc5bd0ce7430cb1.json
  • artifacts/build-info/solc-0_8_20-8a0c96454463046c2bb073cb3dc5bd0ce7430cb1.output.json
  • artifacts/build-info/solc-0_8_20-a7c76073ca3493cfadcb581cc9028731b235b0aa.json
  • artifacts/build-info/solc-0_8_20-a7c76073ca3493cfadcb581cc9028731b235b0aa.output.json
  • artifacts/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • artifacts/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.output.json
  • artifacts/build-info/solc-0_8_20-d3b0d9b078e3fc2e10375b7a08a7410a43865eda.json
  • artifacts/build-info/solc-0_8_20-d3b0d9b078e3fc2e10375b7a08a7410a43865eda.output.json
  • artifacts/build-info/solc-0_8_20-dd5ad135e14cccdeeb5d3ebbf89923097f46a8aa.json
  • artifacts/build-info/solc-0_8_20-dd5ad135e14cccdeeb5d3ebbf89923097f46a8aa.output.json
  • artifacts/contracts/Gateway.sol/Gateway.json
  • artifacts/contracts/Gateway.sol/artifacts.d.ts
  • artifacts/contracts/GatewaySettingManager.sol/GatewaySettingManager.json
  • artifacts/contracts/GatewaySettingManager.sol/artifacts.d.ts
  • artifacts/contracts/ProviderBatchCallAndSponsor.sol/ProviderBatchCallAndSponsor.json
  • artifacts/contracts/ProviderBatchCallAndSponsor.sol/artifacts.d.ts
  • artifacts/contracts/interfaces/IGateway.sol/IGateway.json
  • artifacts/contracts/interfaces/IGateway.sol/artifacts.d.ts
  • artifacts/contracts/mocks/MockUSDC.sol/MockUSDT.json
  • artifacts/contracts/mocks/MockUSDC.sol/artifacts.d.ts
  • contracts/ProviderBatchCallAndSponsor.sol
  • hardhat.config.ts
  • ignition/deployments/chain-1/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-1/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-1/deployed_addresses.json
  • ignition/deployments/chain-1/journal.jsonl
  • ignition/deployments/chain-1135/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-1135/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-1135/deployed_addresses.json
  • ignition/deployments/chain-1135/journal.jsonl
  • ignition/deployments/chain-137/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-137/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-137/deployed_addresses.json
  • ignition/deployments/chain-137/journal.jsonl
  • ignition/deployments/chain-42161/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-42161/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-42161/deployed_addresses.json
  • ignition/deployments/chain-42161/journal.jsonl
  • ignition/deployments/chain-42220/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-42220/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-42220/deployed_addresses.json
  • ignition/deployments/chain-42220/journal.jsonl
  • ignition/deployments/chain-56/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-56/deployed_addresses.json
  • ignition/deployments/chain-56/journal.jsonl
  • ignition/deployments/chain-8453/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • ignition/deployments/chain-8453/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
  • ignition/deployments/chain-8453/deployed_addresses.json
  • ignition/deployments/chain-8453/journal.jsonl
  • package.json
  • types/ethers-contracts/contracts/ProviderBatchCallAndSponsor.ts
  • types/ethers-contracts/contracts/index.ts
  • types/ethers-contracts/factories/@openzeppelin/contracts/proxy/transparent/ProxyAdmin__factory.ts
  • types/ethers-contracts/factories/@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol/TransparentUpgradeableProxy__factory.ts
  • types/ethers-contracts/factories/contracts/GatewaySettingManager__factory.ts
  • types/ethers-contracts/factories/contracts/Gateway__factory.ts
  • types/ethers-contracts/factories/contracts/ProviderBatchCallAndSponsor__factory.ts
  • types/ethers-contracts/factories/contracts/index.ts
  • types/ethers-contracts/factories/contracts/mocks/MockUSDC.sol/MockUSDT__factory.ts
  • types/ethers-contracts/hardhat.d.ts
  • types/ethers-contracts/index.ts
✅ Files skipped from review due to trivial changes (10)
  • ignition/deployments/chain-56/deployed_addresses.json
  • ignition/deployments/chain-42161/deployed_addresses.json
  • ignition/deployments/chain-42220/deployed_addresses.json
  • artifacts/build-info/solc-0_8_20-3e5625247acd926fb100807651351d2edfb86e57.json
  • ignition/deployments/chain-137/deployed_addresses.json
  • artifacts/build-info/solc-0_8_20-a7c76073ca3493cfadcb581cc9028731b235b0aa.json
  • artifacts/build-info/solc-0_8_20-8a0c96454463046c2bb073cb3dc5bd0ce7430cb1.json
  • ignition/deployments/chain-1/artifacts/ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor.json
  • artifacts/build-info/solc-0_8_20-d3b0d9b078e3fc2e10375b7a08a7410a43865eda.json
  • artifacts/build-info/solc-0_8_20-d3b0d9b078e3fc2e10375b7a08a7410a43865eda.output.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • artifacts/@openzeppelin/contracts/proxy/transparent/ProxyAdmin.sol/ProxyAdmin.json
  • artifacts/contracts/ProviderBatchCallAndSponsor.sol/artifacts.d.ts
  • artifacts/@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol/TransparentUpgradeableProxy.json

"content": "// SPDX-License-Identifier: MIT\n// OpenZeppelin Contracts (last updated v4.9.4) (utils/Context.sol)\n\npragma solidity ^0.8.0;\nimport {Initializable} from \"../proxy/utils/Initializable.sol\";\n\n/**\n * @dev Provides information about the current execution context, including the\n * sender of the transaction and its data. While these are generally available\n * via msg.sender and msg.data, they should not be accessed in such a direct\n * manner, since when dealing with meta-transactions the account sending and\n * paying for execution may not be the actual sender (as far as an application\n * is concerned).\n *\n * This contract is only required for intermediate, library-like contracts.\n */\nabstract contract ContextUpgradeable is Initializable {\n function __Context_init() internal onlyInitializing {\n }\n\n function __Context_init_unchained() internal onlyInitializing {\n }\n function _msgSender() internal view virtual returns (address) {\n return msg.sender;\n }\n\n function _msgData() internal view virtual returns (bytes calldata) {\n return msg.data;\n }\n\n function _contextSuffixLength() internal view virtual returns (uint256) {\n return 0;\n }\n\n /**\n * @dev This empty reserved space is put in place to allow future versions to add new\n * variables without shifting down storage in the inheritance chain.\n * See https://docs.openzeppelin.com/contracts/4.x/upgradeable#storage_gaps\n */\n uint256[50] private __gap;\n}\n"
},
"project/contracts/GatewaySettingManager.sol": {
"content": "// SPDX-License-Identifier: UNLICENSED\n\n/**\n * @title GatewaySettingManager\n * @dev This contract manages the settings and configurations for the Gateway protocol.\n */\npragma solidity ^0.8.18;\n\nimport '@openzeppelin/contracts-upgradeable/access/Ownable2StepUpgradeable.sol';\n\ncontract GatewaySettingManager is Ownable2StepUpgradeable {\n\tuint256 internal MAX_BPS;\n\tuint64 internal protocolFeePercent; // DEPRECATED — kept for proxy storage compatibility (do not remove)\n\taddress internal treasuryAddress;\n\taddress internal _aggregatorAddress;\n\tmapping(address => uint256) internal _isTokenSupported;\n\n\t// Token-specific fee settings\n\tstruct TokenFeeSettings {\n\t\tuint256 senderToProvider; // % of sender fee that goes to provider (local mode)\n\t\tuint256 providerToAggregator; // % of provider's share that goes to aggregator (local mode)\n\t\tuint256 senderToAggregator; // % of sender fee that goes to aggregator (fx mode)\n\t\tuint256 providerToAggregatorFx; // % of transaction amount provider pays to aggregator (fx mode)\n\t}\n\n\tmapping(address => TokenFeeSettings) internal _tokenFeeSettings;\n\n\tuint256[49] private __gap;\n\n\tevent SettingManagerBool(bytes32 indexed what, address indexed value, uint256 status);\n\tevent ProtocolAddressUpdated(bytes32 indexed what, address indexed treasuryAddress);\n\tevent SetFeeRecipient(address indexed treasuryAddress);\n\tevent TokenFeeSettingsUpdated(\n\t\taddress indexed token,\n\t\tuint256 senderToProvider,\n\t\tuint256 providerToAggregator,\n\t\tuint256 senderToAggregator,\n\t\tuint256 providerToAggregatorFx\n\t);\n\n\t/* ##################################################################\n OWNER FUNCTIONS\n ################################################################## */\n\n\t/**\n\t * @dev Sets the boolean value for a specific setting.\n\t * @param what The setting to be updated.\n\t * @param value The address or value associated with the setting.\n\t * @param status The boolean value 
to be set.\n\t * Requirements:\n\t * - The value must not be a zero address.\n\t */\n\tfunction settingManagerBool(bytes32 what, address value, uint256 status) external onlyOwner {\n\t\trequire(value != address(0), 'Gateway: zero address');\n\t\trequire(status == 1 || status == 2, 'Gateway: invalid status');\n\t\tif (what == 'token') {\n\t\t\t_isTokenSupported[value] = status;\n\t\t\temit SettingManagerBool(what, value, status);\n\t\t}\n\t}\n\n\t/**\n\t * @dev Updates a protocol address.\n\t * @param what The address type to be updated (treasury or aggregator).\n\t * @param value The new address to be set.\n\t * Requirements:\n\t * - The value must not be a zero address.\n\t */\n\tfunction updateProtocolAddress(bytes32 what, address value) external onlyOwner {\n\t\trequire(value != address(0), 'Gateway: zero address');\n\t\tbool updated;\n\t\tif (what == 'treasury') {\n\t\t\trequire(treasuryAddress != value, 'Gateway: treasury address already set');\n\t\t\ttreasuryAddress = value;\n\t\t\tupdated = true;\n\t\t} else if (what == 'aggregator') {\n\t\t\trequire(_aggregatorAddress != value, 'Gateway: aggregator address already set');\n\t\t\t_aggregatorAddress = value;\n\t\t\tupdated = true;\n\t\t}\n\t\tif (updated) {\n\t\t\temit ProtocolAddressUpdated(what, value);\n\t\t}\n\t}\n\n\t/**\n\t * @dev Sets token-specific fee settings for stablecoins.\n\t * @param token The token address to configure.\n\t * @param senderToProvider Percentage of sender fee that goes to provider (local mode).\n\t * @param providerToAggregator Percentage of provider's share that goes to aggregator (local mode).\n\t * @param senderToAggregator Percentage of sender fee that goes to aggregator (fx mode).\n\t * @param providerToAggregatorFx Percentage of transaction amount provider pays to aggregator (fx mode).\n\t * Requirements:\n\t * - The token must be supported.\n\t * - Fee percentages must be within valid ranges.\n\t */\n\tfunction setTokenFeeSettings(\n\t\taddress token,\n\t\tuint256 
senderToProvider,\n\t\tuint256 providerToAggregator,\n\t\tuint256 senderToAggregator,\n\t\tuint256 providerToAggregatorFx\n\t) external onlyOwner {\n\t\trequire(_isTokenSupported[token] == 1, 'Gateway: token not supported');\n\t\trequire(senderToProvider <= MAX_BPS, 'Gateway: invalid sender to provider');\n\t\trequire(providerToAggregator <= MAX_BPS, 'Gateway: invalid provider to aggregator');\n\t\trequire(senderToAggregator <= MAX_BPS, 'Gateway: invalid sender to aggregator');\n\t\trequire(providerToAggregatorFx <= MAX_BPS, 'Gateway: invalid provider to aggregator fx');\n\n\t\t_tokenFeeSettings[token] = TokenFeeSettings({\n\t\t\tsenderToProvider: senderToProvider,\n\t\t\tproviderToAggregator: providerToAggregator,\n\t\t\tsenderToAggregator: senderToAggregator,\n\t\t\tproviderToAggregatorFx: providerToAggregatorFx\n\t\t});\n\n\t\temit TokenFeeSettingsUpdated(\n\t\t\ttoken,\n\t\t\tsenderToProvider,\n\t\t\tproviderToAggregator,\n\t\t\tsenderToAggregator,\n\t\t\tproviderToAggregatorFx\n\t\t);\n\t}\n\n\t/**\n\t * @dev Gets token-specific fee settings.\n\t * @param token The token address to query.\n\t * @return TokenFeeSettings struct containing all fee settings for the token.\n\t */\n\tfunction getTokenFeeSettings(address token) external view returns (TokenFeeSettings memory) {\n\t\treturn _tokenFeeSettings[token];\n\t}\n}\n"

⚠️ Potential issue | 🟠 Major

Revert on unsupported admin keys instead of silently no-oping.

Both `settingManagerBool` and `updateProtocolAddress` succeed on unknown `what` values without changing state or emitting an event. A typo in an ops script will look like a successful config change.

Suggested change
 function settingManagerBool(bytes32 what, address value, uint256 status) external onlyOwner {
 	require(value != address(0), 'Gateway: zero address');
 	require(status == 1 || status == 2, 'Gateway: invalid status');
 	if (what == 'token') {
 		_isTokenSupported[value] = status;
 		emit SettingManagerBool(what, value, status);
+		return;
 	}
+	revert('Gateway: unsupported setting');
 }
 
 function updateProtocolAddress(bytes32 what, address value) external onlyOwner {
 	require(value != address(0), 'Gateway: zero address');
 	bool updated;
 	if (what == 'treasury') {
 		require(treasuryAddress != value, 'Gateway: treasury address already set');
 		treasuryAddress = value;
 		updated = true;
 	} else if (what == 'aggregator') {
 		require(_aggregatorAddress != value, 'Gateway: aggregator address already set');
 		_aggregatorAddress = value;
 		updated = true;
 	}
 	if (updated) {
 		emit ProtocolAddressUpdated(what, value);
+		return;
 	}
+	revert('Gateway: unsupported protocol address');
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"content": "// SPDX-License-Identifier: UNLICENSED\n\n/**\n * @title GatewaySettingManager\n * @dev This contract manages the settings and configurations for the Gateway protocol.\n */\npragma solidity ^0.8.18;\n\nimport '@openzeppelin/contracts-upgradeable/access/Ownable2StepUpgradeable.sol';\n\ncontract GatewaySettingManager is Ownable2StepUpgradeable {\n\tuint256 internal MAX_BPS;\n\tuint64 internal protocolFeePercent; // DEPRECATED — kept for proxy storage compatibility (do not remove)\n\taddress internal treasuryAddress;\n\taddress internal _aggregatorAddress;\n\tmapping(address => uint256) internal _isTokenSupported;\n\n\t// Token-specific fee settings\n\tstruct TokenFeeSettings {\n\t\tuint256 senderToProvider; // % of sender fee that goes to provider (local mode)\n\t\tuint256 providerToAggregator; // % of provider's share that goes to aggregator (local mode)\n\t\tuint256 senderToAggregator; // % of sender fee that goes to aggregator (fx mode)\n\t\tuint256 providerToAggregatorFx; // % of transaction amount provider pays to aggregator (fx mode)\n\t}\n\n\tmapping(address => TokenFeeSettings) internal _tokenFeeSettings;\n\n\tuint256[49] private __gap;\n\n\tevent SettingManagerBool(bytes32 indexed what, address indexed value, uint256 status);\n\tevent ProtocolAddressUpdated(bytes32 indexed what, address indexed treasuryAddress);\n\tevent SetFeeRecipient(address indexed treasuryAddress);\n\tevent TokenFeeSettingsUpdated(\n\t\taddress indexed token,\n\t\tuint256 senderToProvider,\n\t\tuint256 providerToAggregator,\n\t\tuint256 senderToAggregator,\n\t\tuint256 providerToAggregatorFx\n\t);\n\n\t/* ##################################################################\n OWNER FUNCTIONS\n ################################################################## */\n\n\t/**\n\t * @dev Sets the boolean value for a specific setting.\n\t * @param what The setting to be updated.\n\t * @param value The address or value associated with the setting.\n\t * @param status The boolean value 
to be set.\n\t * Requirements:\n\t * - The value must not be a zero address.\n\t */\n\tfunction settingManagerBool(bytes32 what, address value, uint256 status) external onlyOwner {\n\t\trequire(value != address(0), 'Gateway: zero address');\n\t\trequire(status == 1 || status == 2, 'Gateway: invalid status');\n\t\tif (what == 'token') {\n\t\t\t_isTokenSupported[value] = status;\n\t\t\temit SettingManagerBool(what, value, status);\n\t\t}\n\t}\n\n\t/**\n\t * @dev Updates a protocol address.\n\t * @param what The address type to be updated (treasury or aggregator).\n\t * @param value The new address to be set.\n\t * Requirements:\n\t * - The value must not be a zero address.\n\t */\n\tfunction updateProtocolAddress(bytes32 what, address value) external onlyOwner {\n\t\trequire(value != address(0), 'Gateway: zero address');\n\t\tbool updated;\n\t\tif (what == 'treasury') {\n\t\t\trequire(treasuryAddress != value, 'Gateway: treasury address already set');\n\t\t\ttreasuryAddress = value;\n\t\t\tupdated = true;\n\t\t} else if (what == 'aggregator') {\n\t\t\trequire(_aggregatorAddress != value, 'Gateway: aggregator address already set');\n\t\t\t_aggregatorAddress = value;\n\t\t\tupdated = true;\n\t\t}\n\t\tif (updated) {\n\t\t\temit ProtocolAddressUpdated(what, value);\n\t\t}\n\t}\n\n\t/**\n\t * @dev Sets token-specific fee settings for stablecoins.\n\t * @param token The token address to configure.\n\t * @param senderToProvider Percentage of sender fee that goes to provider (local mode).\n\t * @param providerToAggregator Percentage of provider's share that goes to aggregator (local mode).\n\t * @param senderToAggregator Percentage of sender fee that goes to aggregator (fx mode).\n\t * @param providerToAggregatorFx Percentage of transaction amount provider pays to aggregator (fx mode).\n\t * Requirements:\n\t * - The token must be supported.\n\t * - Fee percentages must be within valid ranges.\n\t */\n\tfunction setTokenFeeSettings(\n\t\taddress token,\n\t\tuint256 
senderToProvider,\n\t\tuint256 providerToAggregator,\n\t\tuint256 senderToAggregator,\n\t\tuint256 providerToAggregatorFx\n\t) external onlyOwner {\n\t\trequire(_isTokenSupported[token] == 1, 'Gateway: token not supported');\n\t\trequire(senderToProvider <= MAX_BPS, 'Gateway: invalid sender to provider');\n\t\trequire(providerToAggregator <= MAX_BPS, 'Gateway: invalid provider to aggregator');\n\t\trequire(senderToAggregator <= MAX_BPS, 'Gateway: invalid sender to aggregator');\n\t\trequire(providerToAggregatorFx <= MAX_BPS, 'Gateway: invalid provider to aggregator fx');\n\n\t\t_tokenFeeSettings[token] = TokenFeeSettings({\n\t\t\tsenderToProvider: senderToProvider,\n\t\t\tproviderToAggregator: providerToAggregator,\n\t\t\tsenderToAggregator: senderToAggregator,\n\t\t\tproviderToAggregatorFx: providerToAggregatorFx\n\t\t});\n\n\t\temit TokenFeeSettingsUpdated(\n\t\t\ttoken,\n\t\t\tsenderToProvider,\n\t\t\tproviderToAggregator,\n\t\t\tsenderToAggregator,\n\t\t\tproviderToAggregatorFx\n\t\t);\n\t}\n\n\t/**\n\t * @dev Gets token-specific fee settings.\n\t * @param token The token address to query.\n\t * @return TokenFeeSettings struct containing all fee settings for the token.\n\t */\n\tfunction getTokenFeeSettings(address token) external view returns (TokenFeeSettings memory) {\n\t\treturn _tokenFeeSettings[token];\n\t}\n}\n"
function settingManagerBool(bytes32 what, address value, uint256 status) external onlyOwner {
require(value != address(0), 'Gateway: zero address');
require(status == 1 || status == 2, 'Gateway: invalid status');
if (what == 'token') {
_isTokenSupported[value] = status;
emit SettingManagerBool(what, value, status);
return;
}
revert('Gateway: unsupported setting');
}
function updateProtocolAddress(bytes32 what, address value) external onlyOwner {
require(value != address(0), 'Gateway: zero address');
bool updated;
if (what == 'treasury') {
require(treasuryAddress != value, 'Gateway: treasury address already set');
treasuryAddress = value;
updated = true;
} else if (what == 'aggregator') {
require(_aggregatorAddress != value, 'Gateway: aggregator address already set');
_aggregatorAddress = value;
updated = true;
}
if (updated) {
emit ProtocolAddressUpdated(what, value);
return;
}
revert('Gateway: unsupported protocol address');
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@artifacts/build-info/solc-0_8_20-5eff7bec14d350b270e003042174688bc859428c.json`
at line 52, Both settingManagerBool and updateProtocolAddress currently silently
no-op when passed an unknown bytes32 `what`; change them to revert instead. In
settingManagerBool (function settingManagerBool) add an else branch that reverts
with a clear message (e.g., 'Gateway: unknown setting') when `what` is not
'token'; in updateProtocolAddress (function updateProtocolAddress) add an else
that reverts (e.g., 'Gateway: unknown address type') when `what` is neither
'treasury' nor 'aggregator'; keep existing require checks and events unchanged.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

manager=$(fd '^GatewaySettingManager\.sol$' . | head -n1)
test -n "$manager"

echo "=== GatewaySettingManager source ==="
sed -n '1,220p' "$manager"

echo
echo "=== init / abstract / MAX_BPS checks ==="
rg -n -C2 'abstract contract GatewaySettingManager|contract GatewaySettingManager|initializer|reinitializer|constructor|__Ownable2Step_init|MAX_BPS' "$manager"

gateway=$(fd '^Gateway\.sol$' . | head -n1 || true)
if [ -n "${gateway:-}" ]; then
  echo
  echo "=== Gateway inheritance / initialization context ==="
  rg -n -C3 'GatewaySettingManager|__Ownable2Step_init|MAX_BPS\s*=|initializer|initialize' "$gateway"
fi

Repository: paycrest/contracts

Length of output: 199


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Finding GatewaySettingManager.sol (search entire repo) ==="
fd 'GatewaySettingManager\.sol$' --type f

echo
echo "=== Finding Gateway.sol ==="
fd 'Gateway\.sol$' --type f

Repository: paycrest/contracts

Length of output: 267


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== GatewaySettingManager.sol source ==="
cat -n contracts/GatewaySettingManager.sol

echo
echo "=== Gateway.sol (first 250 lines) ==="
head -n 250 contracts/Gateway.sol | cat -n

Repository: paycrest/contracts

Length of output: 15642


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Searching for deployment/initialization patterns ==="
rg -n 'GatewaySettingManager' --type sol -A 3 -B 1 contracts/

echo
echo "=== Checking for test files that deploy these contracts ==="
fd '\.(test|spec)\.(js|ts|sol)$' --type f | head -20

Repository: paycrest/contracts

Length of output: 163


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Searching for GatewaySettingManager usage ==="
rg 'GatewaySettingManager' -A 2 -B 1

echo
echo "=== Checking for test files ==="
find . -type f \( -name "*.test.js" -o -name "*.spec.js" -o -name "*.test.ts" -o -name "*.spec.ts" \)

Repository: paycrest/contracts

Length of output: 50376


Mark GatewaySettingManager as abstract to prevent accidental direct deployment.

GatewaySettingManager is a concrete contract with no initializer. While Gateway correctly initializes `MAX_BPS` and calls `__Ownable2Step_init()` in its `initialize()` function, deploying GatewaySettingManager directly would leave it ownerless with `MAX_BPS == 0`, so every `onlyOwner` function reverts for all callers and the contract can never be configured. Since this contract is only meant as a mixin for Gateway, declaring it abstract enforces this constraint at compile time and prevents deployment errors.

Additionally, `settingManagerBool` (line 53) and `updateProtocolAddress` (line 69) silently ignore unsupported `what` keys instead of reverting, which could mask configuration errors. Consider adding a revert for invalid keys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@artifacts/build-info/solc-0_8_20-5eff7bec14d350b270e003042174688bc859428c.json`
at line 52, Mark the GatewaySettingManager contract as abstract (declare
"abstract contract GatewaySettingManager") to prevent direct deployment without
initialization (MAX_BPS and Ownable2Step not set), and add explicit reverts in
settingManagerBool and updateProtocolAddress for unsupported `what` values
(e.g., revert with "Gateway: invalid setting" or "Gateway: invalid protocol
key") so invalid keys don't silently no-op; locate these changes in the
GatewaySettingManager contract around the settingManagerBool and
updateProtocolAddress functions and ensure onlyOwner and MAX_BPS storage names
are unchanged.

Comment on lines +160 to +161
"bytecode": "0x608060405234801561000f575f80fd5b50610aad8061001d5f395ff3fe608060405260043610610034575f3560e01c80633f707e6b1461003d5780636171d1c914610050578063affed0e01461006357005b3661003b57005b005b61003b61004b366004610709565b610089565b61003b61005e366004610748565b6100c8565b34801561006e575f80fd5b506100775f5481565b60405190815260200160405180910390f35b60405162461bcd60e51b815260206004820152600f60248201526e139bdd081a5b5c1b195b595b9d1959608a1b60448201526064015b60405180910390fd5b60605f5b8481101561019957818686838181106100e7576100e76107d7565b90506020028101906100f991906107eb565b610107906020810190610824565b878784818110610119576101196107d7565b905060200281019061012b91906107eb565b60200135888885818110610141576101416107d7565b905060200281019061015391906107eb565b610161906040810190610844565b6040516020016101759594939291906108b4565b60405160208183030381529060405291508080610191906108f5565b9150506100cc565b505f8054826040516020016101af929190610919565b6040516020818303038152906040528051906020012090505f6101fe827f19457468657265756d205369676e6564204d6573736167653a0a3332000000005f908152601c91909152603c902090565b90505f6102408287878080601f0160208091040260200160405190810160405280939291908181526020018383808284375f920191909152506102a292505050565b90506001600160a01b038116301461028e5760405162461bcd60e51b8152602060048201526011602482015270496e76616c6964207369676e617475726560781b60448201526064016100bf565b61029888886102c4565b5050505050505050565b5f805f6102af8585610361565b915091506102bc816103a3565b509392505050565b5f8054908190806102d4836108f5565b91905055505f5b828110156103215761030f8484838181106102f8576102f86107d7565b905060200281019061030a91906107eb565b6104ef565b80610319816108f5565b9150506102db565b50807f280bb3599696acbf79fb8ffcde81a57337b52500f789600fbb1cff9b4cbaba39848460405161035492919061095a565b60405180910390a2505050565b5f808251604103610395576020830151604084015160608501515f1a6103898782858561060b565b9450945050505061039c565b505f905060025b9250929050565b5f8160048111156103b6576103b6610a32565b036103be5750565b600181600
48111156103d2576103d2610a32565b0361041f5760405162461bcd60e51b815260206004820152601860248201527f45434453413a20696e76616c6964207369676e6174757265000000000000000060448201526064016100bf565b600281600481111561043357610433610a32565b036104805760405162461bcd60e51b815260206004820152601f60248201527f45434453413a20696e76616c6964207369676e6174757265206c656e6774680060448201526064016100bf565b600381600481111561049457610494610a32565b036104ec5760405162461bcd60e51b815260206004820152602260248201527f45434453413a20696e76616c6964207369676e6174757265202773272076616c604482015261756560f01b60648201526084016100bf565b50565b5f6104fd6020830183610824565b6001600160a01b031660208301356105186040850185610844565b604051610526929190610a46565b5f6040518083038185875af1925050503d805f8114610560576040519150601f19603f3d011682016040523d82523d5f602084013e610565565b606091505b50509050806105a65760405162461bcd60e51b815260206004820152600d60248201526c10d85b1b081c995d995c9d1959609a1b60448201526064016100bf565b6105b36020830183610824565b6001600160a01b0316337fed7e8f919df9cc0d0ad8b4057d084ebf319b630564d5da283e14751adc931f3a60208501356105f06040870187610844565b6040516105ff93929190610a55565b60405180910390a35050565b5f807f7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a083111561064057505f905060036106bf565b604080515f8082526020820180845289905260ff881692820192909252606081018690526080810185905260019060a0016020604051602081039080840390855afa158015610691573d5f803e3d5ffd5b5050604051601f1901519150506001600160a01b0381166106b9575f600192509250506106bf565b91505f90505b94509492505050565b5f8083601f8401126106d8575f80fd5b50813567ffffffffffffffff8111156106ef575f80fd5b6020830191508360208260051b850101111561039c575f80fd5b5f806020838503121561071a575f80fd5b823567ffffffffffffffff811115610730575f80fd5b61073c858286016106c8565b90969095509350505050565b5f805f806040858703121561075b575f80fd5b843567ffffffffffffffff80821115610772575f80fd5b61077e888389016106c8565b90965094506020870135915080821115610796575f80fd5b818701915087601f8301126107a9575f80fd5b813
5818111156107b7575f80fd5b8860208285010111156107c8575f80fd5b95989497505060200194505050565b634e487b7160e01b5f52603260045260245ffd5b5f8235605e198336030181126107ff575f80fd5b9190910192915050565b80356001600160a01b038116811461081f575f80fd5b919050565b5f60208284031215610834575f80fd5b61083d82610809565b9392505050565b5f808335601e19843603018112610859575f80fd5b83018035915067ffffffffffffffff821115610873575f80fd5b60200191503681900382131561039c575f80fd5b5f81515f5b818110156108a6576020818501810151868301520161088c565b505f93019283525090919050565b5f6108bf8288610887565b6bffffffffffffffffffffffff198760601b168152856014820152838560348301375f9301603401928352509095945050505050565b5f6001820161091257634e487b7160e01b5f52601160045260245ffd5b5060010190565b8281525f61092a6020830184610887565b949350505050565b81835281816020850137505f828201602090810191909152601f909101601f19169091010190565b60208082528181018390525f906040808401600586901b8501820187855b88811015610a2457878303603f190184528135368b9003605e1901811261099d575f80fd5b8a0160606001600160a01b036109b283610809565b168552878201358886015286820135601e198336030181126109d2575f80fd5b90910187810191903567ffffffffffffffff8111156109ef575f80fd5b8036038313156109fd575f80fd5b8188870152610a0f8287018285610932565b96890196955050509186019150600101610978565b509098975050505050505050565b634e487b7160e01b5f52602160045260245ffd5b818382375f9101908152919050565b838152604060208201525f610a6e604083018486610932565b9594505050505056fea264697066735822122012a6d23bfa08d943a2faf20b379215aa8d9b6f0dc01d04c0e51d89da837b5e1664736f6c63430008140033",
"deployedBytecode": "0x608060405260043610610034575f3560e01c80633f707e6b1461003d5780636171d1c914610050578063affed0e01461006357005b3661003b57005b005b61003b61004b366004610709565b610089565b61003b61005e366004610748565b6100c8565b34801561006e575f80fd5b506100775f5481565b60405190815260200160405180910390f35b60405162461bcd60e51b815260206004820152600f60248201526e139bdd081a5b5c1b195b595b9d1959608a1b60448201526064015b60405180910390fd5b60605f5b8481101561019957818686838181106100e7576100e76107d7565b90506020028101906100f991906107eb565b610107906020810190610824565b878784818110610119576101196107d7565b905060200281019061012b91906107eb565b60200135888885818110610141576101416107d7565b905060200281019061015391906107eb565b610161906040810190610844565b6040516020016101759594939291906108b4565b60405160208183030381529060405291508080610191906108f5565b9150506100cc565b505f8054826040516020016101af929190610919565b6040516020818303038152906040528051906020012090505f6101fe827f19457468657265756d205369676e6564204d6573736167653a0a3332000000005f908152601c91909152603c902090565b90505f6102408287878080601f0160208091040260200160405190810160405280939291908181526020018383808284375f920191909152506102a292505050565b90506001600160a01b038116301461028e5760405162461bcd60e51b8152602060048201526011602482015270496e76616c6964207369676e617475726560781b60448201526064016100bf565b61029888886102c4565b5050505050505050565b5f805f6102af8585610361565b915091506102bc816103a3565b509392505050565b5f8054908190806102d4836108f5565b91905055505f5b828110156103215761030f8484838181106102f8576102f86107d7565b905060200281019061030a91906107eb565b6104ef565b80610319816108f5565b9150506102db565b50807f280bb3599696acbf79fb8ffcde81a57337b52500f789600fbb1cff9b4cbaba39848460405161035492919061095a565b60405180910390a2505050565b5f808251604103610395576020830151604084015160608501515f1a6103898782858561060b565b9450945050505061039c565b505f905060025b9250929050565b5f8160048111156103b6576103b6610a32565b036103be5750565b60018160048111156103d2576103d2610a32565b0361041f57604051624
61bcd60e51b815260206004820152601860248201527f45434453413a20696e76616c6964207369676e6174757265000000000000000060448201526064016100bf565b600281600481111561043357610433610a32565b036104805760405162461bcd60e51b815260206004820152601f60248201527f45434453413a20696e76616c6964207369676e6174757265206c656e6774680060448201526064016100bf565b600381600481111561049457610494610a32565b036104ec5760405162461bcd60e51b815260206004820152602260248201527f45434453413a20696e76616c6964207369676e6174757265202773272076616c604482015261756560f01b60648201526084016100bf565b50565b5f6104fd6020830183610824565b6001600160a01b031660208301356105186040850185610844565b604051610526929190610a46565b5f6040518083038185875af1925050503d805f8114610560576040519150601f19603f3d011682016040523d82523d5f602084013e610565565b606091505b50509050806105a65760405162461bcd60e51b815260206004820152600d60248201526c10d85b1b081c995d995c9d1959609a1b60448201526064016100bf565b6105b36020830183610824565b6001600160a01b0316337fed7e8f919df9cc0d0ad8b4057d084ebf319b630564d5da283e14751adc931f3a60208501356105f06040870187610844565b6040516105ff93929190610a55565b60405180910390a35050565b5f807f7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a083111561064057505f905060036106bf565b604080515f8082526020820180845289905260ff881692820192909252606081018690526080810185905260019060a0016020604051602081039080840390855afa158015610691573d5f803e3d5ffd5b5050604051601f1901519150506001600160a01b0381166106b9575f600192509250506106bf565b91505f90505b94509492505050565b5f8083601f8401126106d8575f80fd5b50813567ffffffffffffffff8111156106ef575f80fd5b6020830191508360208260051b850101111561039c575f80fd5b5f806020838503121561071a575f80fd5b823567ffffffffffffffff811115610730575f80fd5b61073c858286016106c8565b90969095509350505050565b5f805f806040858703121561075b575f80fd5b843567ffffffffffffffff80821115610772575f80fd5b61077e888389016106c8565b90965094506020870135915080821115610796575f80fd5b818701915087601f8301126107a9575f80fd5b8135818111156107b7575f80fd5b8860208285010111156107c85
75f80fd5b95989497505060200194505050565b634e487b7160e01b5f52603260045260245ffd5b5f8235605e198336030181126107ff575f80fd5b9190910192915050565b80356001600160a01b038116811461081f575f80fd5b919050565b5f60208284031215610834575f80fd5b61083d82610809565b9392505050565b5f808335601e19843603018112610859575f80fd5b83018035915067ffffffffffffffff821115610873575f80fd5b60200191503681900382131561039c575f80fd5b5f81515f5b818110156108a6576020818501810151868301520161088c565b505f93019283525090919050565b5f6108bf8288610887565b6bffffffffffffffffffffffff198760601b168152856014820152838560348301375f9301603401928352509095945050505050565b5f6001820161091257634e487b7160e01b5f52601160045260245ffd5b5060010190565b8281525f61092a6020830184610887565b949350505050565b81835281816020850137505f828201602090810191909152601f909101601f19169091010190565b60208082528181018390525f906040808401600586901b8501820187855b88811015610a2457878303603f190184528135368b9003605e1901811261099d575f80fd5b8a0160606001600160a01b036109b283610809565b168552878201358886015286820135601e198336030181126109d2575f80fd5b90910187810191903567ffffffffffffffff8111156109ef575f80fd5b8036038313156109fd575f80fd5b8188870152610a0f8287018285610932565b96890196955050509186019150600101610978565b509098975050505050505050565b634e487b7160e01b5f52602160045260245ffd5b818382375f9101908152919050565b838152604060208201525f610a6e604083018486610932565b9594505050505056fea264697066735822122012a6d23bfa08d943a2faf20b379215aa8d9b6f0dc01d04c0e51d89da837b5e1664736f6c63430008140033",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

file=$(fd '^ProviderBatchCallAndSponsor\.sol$' . | head -n1)
test -n "$file"

echo "=== ProviderBatchCallAndSponsor source ==="
sed -n '1,260p' "$file"

echo
echo "=== signature domain checks ==="
rg -n -C3 'block\.chainid|chainid|EIP712|domain|toEthSignedMessageHash|recover|abi\.encode|keccak256|nonce' "$file"

echo
echo "=== deployed chains in this repo ==="
fd '^deployed_addresses\.json$' ignition/deployments -x rg -n 'ProviderBatchCallAndSponsor#ProviderBatchCallAndSponsor' {}

Repository: paycrest/contracts

Length of output: 211


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find the actual .sol source file (not directory)
echo "=== Locating ProviderBatchCallAndSponsor.sol ==="
find . -name "ProviderBatchCallAndSponsor.sol" -type f

# Also check the artifact structure
echo
echo "=== Artifact structure ==="
ls -la ignition/deployments/chain-137/artifacts/ 2>/dev/null | head -20 || echo "Directory check failed"

Repository: paycrest/contracts

Length of output: 407


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Read the source file - try common locations
if [ -f "contracts/ProviderBatchCallAndSponsor.sol" ]; then
  echo "=== Source contract content ==="
  cat -n "contracts/ProviderBatchCallAndSponsor.sol" | head -200
fi

# Look for signature verification logic
echo
echo "=== Searching for keccak256, toEthSignedMessageHash, recover ==="
rg -n "keccak256|toEthSignedMessageHash|recover|\.chainid|block\.chainid|EIP712|DOMAIN_SEPARATOR" --type solidity 2>/dev/null | head -40

Repository: paycrest/contracts

Length of output: 4862


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check for all deployed_addresses.json files to see multi-chain deployment
echo "=== Checking multi-chain deployments ==="
find ignition/deployments -name "deployed_addresses.json" -type f 2>/dev/null | while read f; do
  echo "File: $f"
  rg -n "ProviderBatchCallAndSponsor" "$f" || echo "  (not found in this chain)"
done

echo
echo "=== Chain directories in deployments ==="
ls -d ignition/deployments/chain-* 2>/dev/null | head -10

Repository: paycrest/contracts

Length of output: 1550


Bind the batch signature to a chain-specific domain before deploying across chains.

The digest construction at line 51 of `contracts/ProviderBatchCallAndSponsor.sol` hashes only `keccak256(abi.encodePacked(nonce, encodedCalls))`, omitting `block.chainid` or any domain separator. Under EIP-7702, the same delegated account can execute this contract on multiple chains (currently deployed to 7 chains: Ethereum, Polygon, Arbitrum, Celo, BSC, Base, and testnet). A signature valid on one chain can be replayed on another if the nonce values match. Include `block.chainid` in the digest computation or switch to EIP-712 with a domain separator.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@ignition/deployments/chain-137/artifacts/ProviderBatchCallAndSponsor`#ProviderBatchCallAndSponsor.json
around lines 160 - 161, The signature digest currently built in
ProviderBatchCallAndSponsor (the keccak256(abi.encodePacked(nonce,
encodedCalls)) call that constructs the batch/signature digest) is missing a
chain-specific domain; update the digest computation to include block.chainid
(e.g. keccak256(abi.encodePacked(block.chainid, nonce, encodedCalls))) or
replace the ad-hoc digest with an EIP-712 domain-separated hash, so the digest
function (the method that returns the batch hash / verifies signatures) is bound
to a specific chain and prevents cross-chain replay.
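The remediation described in the comment above, binding the digest to `block.chainid`, can be sketched off-chain. This Python model mimics `keccak256(abi.encodePacked(block.chainid, nonce, encodedCalls))` with 32-byte big-endian uints, using stdlib `sha3_256` as a stand-in for keccak256 (the two are different hash functions; only input-byte equality matters for the demonstration):

```python
import hashlib

def digest(chain_id: int, nonce: int, encoded_calls: bytes) -> bytes:
    # Mirrors abi.encodePacked(block.chainid, nonce, encodedCalls):
    # two 32-byte big-endian uints followed by the raw call bytes.
    packed = chain_id.to_bytes(32, "big") + nonce.to_bytes(32, "big") + encoded_calls
    # Stand-in hash for the sketch; Solidity would use keccak256.
    return hashlib.sha3_256(packed).digest()

calls = b"\x01\x02\x03"
# Same nonce and calls, different chains -> different digests,
# so a signature over one digest cannot be replayed on the other chain.
assert digest(1, 0, calls) != digest(137, 0, calls)
assert digest(1, 0, calls) == digest(1, 0, calls)
```

A wallet-side signer hashes the same packed bytes, so a signature produced for chain id 1 fails recovery on chain id 137 even at an equal nonce.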

Comment on lines +48 to +49
"project/contracts/ProviderBatchCallAndSponsor.sol": {
"content": "// SPDX-License-Identifier: MIT\npragma solidity ^0.8.18;\n\nimport \"@openzeppelin/contracts/utils/cryptography/ECDSA.sol\";\n\n/**\n * @title ProviderBatchCallAndSponsor\n *\n * When an EOA upgrades via EIP‑7702, it delegates to this implementation.\n * Off‑chain, the account signs a message authorizing a batch of calls. The message is the hash of:\n * keccak256(abi.encodePacked(nonce, calls))\n * The signature must be generated with the EOA’s private key so that, once upgraded, the recovered signer equals the account’s own address (i.e. address(this)).\n *\n * This contract provides just one way to execute a batch:\n * 1. With a signature: Any sponsor can submit the batch if it carries a valid signature.\n *\n * Replay protection is achieved by using a nonce that is included in the signed message.\n */\ncontract ProviderBatchCallAndSponsor {\n using ECDSA for bytes32;\n \n /// @notice A nonce used for replay protection.\n uint256 public nonce;\n\n /// @notice Represents a single call within a batch.\n struct Call {\n address to;\n uint256 value;\n bytes data;\n }\n\n /// @notice Emitted for every individual call executed.\n event CallExecuted(address indexed sender, address indexed to, uint256 value, bytes data);\n /// @notice Emitted when a full batch is executed.\n event BatchExecuted(uint256 indexed nonce, Call[] calls);\n\n /**\n * @notice Executes a batch of calls using an off–chain signature.\n * @param calls An array of Call structs containing destination, ETH value, and calldata.\n * @param signature The ECDSA signature over the current nonce and the call data.\n *\n * The signature must be produced off–chain by signing:\n * The signing key should be the account’s key (which becomes the smart account’s own identity after upgrade).\n */\n function execute(Call[] calldata calls, bytes calldata signature) external payable {\n // Compute the digest that the account was expected to sign.\n bytes memory encodedCalls;\n for (uint256 i = 0; i < 
calls.length; i++) {\n encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);\n }\n bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));\n \n bytes32 ethSignedMessageHash = ECDSA.toEthSignedMessageHash(digest);\n\n // Recover the signer from the provided signature.\n address recovered = ECDSA.recover(ethSignedMessageHash, signature);\n require(recovered == address(this), \"Invalid signature\");\n\n _executeBatch(calls);\n }\n\n /**\n * @notice Executes a batch of calls directly.\n * @dev This contract doesnt authorized self execution.\n * @param calls An array of Call structs containing destination, ETH value, and calldata.\n */\n function execute(Call[] calldata calls) external payable {\n revert(\"Not implemented\"); // we don't expect this to be called directly\n }\n\n /**\n * @dev Internal function that handles batch execution and nonce incrementation.\n * @param calls An array of Call structs.\n */\n function _executeBatch(Call[] calldata calls) internal {\n uint256 currentNonce = nonce;\n nonce++; // Increment nonce to protect against replay attacks\n\n for (uint256 i = 0; i < calls.length; i++) {\n _executeCall(calls[i]);\n }\n\n emit BatchExecuted(currentNonce, calls);\n }\n\n /**\n * @dev Internal function to execute a single call.\n * @param callItem The Call struct containing destination, value, and calldata.\n */\n function _executeCall(Call calldata callItem) internal {\n (bool success,) = callItem.to.call{value: callItem.value}(callItem.data);\n require(success, \"Call reverted\");\n emit CallExecuted(msg.sender, callItem.to, callItem.value, callItem.data);\n }\n\n // Allow the contract to receive ETH (e.g. from DEX swaps or other transfers).\n fallback() external payable {}\n receive() external payable {}\n}"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
# First, check if the file exists and examine its structure around lines 48-49
head -60 ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json | tail -20
```

Repository: paycrest/contracts

Length of output: 31385


🏁 Script executed:

```shell
# Get line count to understand file size
wc -l ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json
```

Repository: paycrest/contracts

Length of output: 163


🏁 Script executed:

```shell
# Extract the embedded source code from the JSON using jq
jq -r '.sources."project/contracts/ProviderBatchCallAndSponsor.sol".content' ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json > /tmp/contract_source.sol
cat /tmp/contract_source.sol
```

Repository: paycrest/contracts

Length of output: 66


🏁 Script executed:

```shell
# Find the original contract file in the repository
find . -name "ProviderBatchCallAndSponsor.sol" -type f | grep -v node_modules | head -5
```

Repository: paycrest/contracts

Length of output: 105


🏁 Script executed:

```shell
# Check if there's a contracts directory
find . -path "*/contracts/*.sol" -type f | grep -i provider | head -5
```

Repository: paycrest/contracts

Length of output: 105


🏁 Script executed:

```shell
# List project structure
ls -la project/contracts/ 2>/dev/null || echo "Directory not found"
```

Repository: paycrest/contracts

Length of output: 81


🏁 Script executed:

```shell
# Search for the contract in any location
find . -name "*ProviderBatchCallAndSponsor*" -type f 2>/dev/null
```

Repository: paycrest/contracts

Length of output: 1534


🏁 Script executed:

```shell
# Examine the source contract to confirm the security issues
cat -n ./contracts/ProviderBatchCallAndSponsor.sol
```

Repository: paycrest/contracts

Length of output: 4765


Use a canonical, chain-bound digest for signed batches.

The `ProviderBatchCallAndSponsor` contract at lines 49–51 hashes `abi.encodePacked(nonce, encodedCalls)` after concatenating raw `bytes` payloads. This creates two critical vulnerabilities:

  1. Packing ambiguity: `abi.encodePacked()` emits dynamic `bytes` without length prefixes, so different call sequences can produce identical digests. For example, the two-call batch `[(addr1, 100, 0xabcd), (addr2, 200, 0xef)]` and the one-call batch `[(addr1, 100, 0xabcd ‖ addr2 ‖ 200 ‖ 0xef)]` — where the second call's fields are absorbed into the first call's `data` — concatenate to the same bytes, breaking the signature's commitment to a specific batch.

  2. Cross-chain replay: the digest is identical across all chains for the same nonce and calls. Since the contract deploys to seven chains (BSC, Ethereum, Polygon, Arbitrum, Base, Optimism, Celo), a signature valid on one chain works on all the others at the same nonce.

Fix this before merging by using `abi.encode()` with `block.chainid` and `address(this)` to create chain-bound, unambiguous digests, then regenerate all deployment artifacts.
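The packing ambiguity can be reproduced outside Solidity. The sketch below models `abi.encodePacked` for the `Call` struct as `to` (20 bytes) ‖ `value` (32 bytes) ‖ `data` (raw bytes), with the nonce prepended as a 32-byte word; the addresses, values, and nonce are made-up illustrations, not values from the repository. A one-call batch whose `data` swallows an entire second call produces the exact preimage of a two-call batch, so both share one keccak256 digest and one valid signature.

```python
def pack_call(to: bytes, value: int, data: bytes) -> bytes:
    """Model abi.encodePacked(to, value, data): fixed-width address and
    uint256, then the raw data bytes with no length prefix."""
    assert len(to) == 20
    return to + value.to_bytes(32, "big") + data

def pack_batch(nonce: int, calls) -> bytes:
    """Model abi.encodePacked(nonce, encodedCalls): the keccak256 preimage."""
    encoded = b"".join(pack_call(*c) for c in calls)
    return nonce.to_bytes(32, "big") + encoded

addr1 = bytes.fromhex("11" * 20)  # illustrative addresses
addr2 = bytes.fromhex("22" * 20)

# Batch A: two calls.
batch_a = [(addr1, 100, bytes.fromhex("abcd")),
           (addr2, 200, bytes.fromhex("ef"))]

# Batch B: ONE call whose data contains the serialized second call of batch A.
swallowed = addr2 + (200).to_bytes(32, "big") + bytes.fromhex("ef")
batch_b = [(addr1, 100, bytes.fromhex("abcd") + swallowed)]

# Identical preimages => identical digest => one signature authorizes both.
print(pack_batch(7, batch_a) == pack_batch(7, batch_b))  # True
```

A signer who approved the harmless-looking batch B has unknowingly also approved batch A's extra call to `addr2`.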

🔐 Suggested fix
```diff
-        bytes memory encodedCalls;
-        for (uint256 i = 0; i < calls.length; i++) {
-            encodedCalls = abi.encodePacked(encodedCalls, calls[i].to, calls[i].value, calls[i].data);
-        }
-        bytes32 digest = keccak256(abi.encodePacked(nonce, encodedCalls));
+        bytes32 digest = keccak256(
+            abi.encode(
+                block.chainid,
+                address(this),
+                nonce,
+                calls
+            )
+        );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@ignition/deployments/chain-56/build-info/solc-0_8_20-ac89416d23dc73b4d7b4132cc0fb4bb5746fa9da.json`
around lines 48–49, the signed digest construction in
`ProviderBatchCallAndSponsor::execute` is unsafe: it uses
`abi.encodePacked(nonce, encodedCalls)`, which is ambiguous for dynamic bytes
and chain-agnostic. Replace the digest creation with `abi.encode(...)` and
include chain and contract context (the nonce and calls payload alongside
`block.chainid` and `address(this)`) so the hash is unambiguous and
chain-bound, update the `digest`/`ethSignedMessageHash` generation in
`execute` to use the new encoding, and after making this change regenerate all
deployment artifacts and ABI/bytecode outputs.

…ansfers

- Integrated the `SafeERC20` library to ensure safe token transfers in the Gateway contract.
- Replaced direct `transfer` calls with `safeTransfer` and `safeTransferFrom` to correctly handle ERC-20 tokens that do not return a boolean on transfer.
- Updated deployment addresses in the Ignition configuration for the Gateway contract.
- Adjusted the README to reflect changes in deployment commands and network configurations.