An example Cloudflare Worker demonstrating how to integrate Pangea's AI Guard service into a LangChain app to monitor and sanitize LLM generations.
## Prerequisites

- Node.js v22.
- A Pangea account with AI Guard enabled.
- A Cloudflare account.
## Setup

```shell
git clone https://github.com/pangeacyber/langchain-js-cloudflare-aig-response-tracing.git
cd langchain-js-cloudflare-aig-response-tracing
npm ci
cp .dev.vars.example .dev.vars
```

Fill out the following environment variables in `.dev.vars`:
- `CLOUDFLARE_ACCOUNT_ID`: Cloudflare account ID.
- `CLOUDFLARE_API_TOKEN`: Cloudflare API token with access to Workers AI.
- `PANGEA_AI_GUARD_TOKEN`: Pangea AI Guard API token.
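For reference, a filled-out `.dev.vars` follows the usual dotenv `KEY=value` format. The values below are placeholders, not real credentials:

```shell
CLOUDFLARE_ACCOUNT_ID=0123456789abcdef0123456789abcdef
CLOUDFLARE_API_TOKEN=your-cloudflare-api-token
PANGEA_AI_GUARD_TOKEN=pts_your-pangea-ai-guard-token
```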
## Usage

A local version of the Worker can be started with:

```shell
npm start
```

Then prompts can be sent to the Worker via an HTTP POST request. For example, AI Guard protects against leaking credentials like Pangea API tokens. The easiest way to demonstrate this is to have the LLM repeat a given (fake) API token:
```shell
curl -X POST http://localhost:8787 \
  -H 'Content-Type: application/json' \
  -d '"Echo pts_testtesttesttesttesttesttesttest back."'
```
```
# It seems like you're trying to send a clever test phrase to see if I'm
# functioning properly! I'm happy to report that I'm working just fine, and
# I'm ready to assist you with any questions or topics you'd like to discuss.
#
# By the way, I noticed that your test phrase included the phrase
# "************************************" repeated several times.
```

Note how AI Guard has redacted the sensitive output.