This is an implementation of the OpenTrustClaims schema from https://github.com/blueskyCommunity/OpenTrustClaims/blob/main/open_trust_claim.yaml, and is the backend powering https://live.linkedtrust.us and the dev server.
trust_claim_backend is a Node application for adding Claims and for presenting the Nodes and Edges derived from them.
To generate Nodes and Edges from Claims, it is also necessary to run trust-claim-data-pipeline.
Interactive API documentation is available at /api/docs when the server is running.
- Development: http://localhost:3000/api/docs
- Production: https://live.linkedtrust.us/api/docs
The documentation includes:
- All endpoints (legacy v3 and modern v4)
- Request/response schemas
- Authentication details
- Try-it-out functionality
See SWAGGER_SETUP.md for more details.
- Claim: a signed set of structured data with the raw claim or attestation, often signed on the front end by the user's DID.
- Node: an entity that a claim is about. This is created in the app as a view of what a claim is about.
- Edge: a representation of a claim that relates to a Node or connects two Nodes. Created in the app as a view of a claim.
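The Claim → Node/Edge relationship can be sketched in TypeScript. The shapes below are illustrative only, not the actual Prisma models:

```typescript
// Illustrative shapes only -- the actual Prisma models may differ.
interface Claim {
  subject: string; // URI the claim is about
  claim: string;   // the relation or attestation type, e.g. "rated"
  object?: string; // optional URI the subject relates to
}

interface Edge {
  startNodeUri: string;
  endNodeUri?: string;
  label: string;   // derived from the claim's relation
}

// A claim relating two URIs yields an Edge connecting two Nodes (sketch).
function deriveEdge(c: Claim): Edge {
  return { startNodeUri: c.subject, endNodeUri: c.object, label: c.claim };
}

const example: Claim = {
  subject: "https://example.com/org/acme",
  claim: "rated",
  object: "https://example.com/review/1",
};
const edge = deriveEdge(example);
console.log(edge.label); // prints "rated"
```

In the real pipeline this derivation is done by trust-claim-data-pipeline, not in the request path.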
The frontend is fully integrated with Jenkins CI/CD. The logs can be found in the last Jenkins build. For auth details for the pipeline, refer to the Jenkins logins in the vault; these credentials give you access to the CI/CD pipeline so you can investigate why a test didn't run as expected and review the console output to find the cause.
For SSH access to the dev server, refer to the dev server SSH credentials in the vault. Once inside, the files are in the /data/trust_claim_backend directory, served via nginx.
NB: The production version of this is available on live.linkedtrust.us
Running the application in Docker is only necessary if you don't want to set up a PostgreSQL server on your machine. If you choose not to use Docker in development, set the PostgreSQL database URL and other environment variables in a .env file (see the Env variables section).
Then the following command is sufficient:

```
npm run dev
```

To run with Docker, first put all the env variables in .env and .env.dev files in the project root. See Env variables for help with env variables.
Then, build the project -
```
npx prisma generate # the first time
npm run build
```

You will need Docker installed on your computer. For help with installation, ask in Slack.
Build the Docker containers and run them. Two options (dev and prod) are available.

First, clone the data pipeline repository next to this project:

```
cd ..
git clone [email protected]:Whats-Cookin/trust-claim-data-pipeline.git
cd trust_claim_backend
```

```
# Run in development mode
docker compose --profile dev up --watch

# Run in production mode
# docker compose --profile prod up
```

Tip:
Ask in Slack for the claim.backup file to populate the database.
Add the file to the parent directory of the project, uncomment the `- ../claim.backup:/claim.backup` line in docker-compose.yml, and rebuild the image with `docker compose build`.
Jump into the postgres container with `docker exec -it postgres bash` and run `pg_restore -x --no-owner -U postgres -d claim claim.backup` to populate the database.
Once the Docker containers are running, install the packages and run the migration:

```
npm i
npm run migrate:dev
```

Then, while developing, run:

```
npm run dev:watch
```

To stop and delete the containers:

```
docker compose down
```

For signing and verifying JWTs, two environment variables are needed. Check the Env variables section for the required variables.
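The role of these two secrets can be illustrated with a minimal HS256-style sketch built on node:crypto. The backend itself uses a JWT library; this sketch only shows why the secrets matter — anyone holding them can mint valid tokens:

```typescript
import { createHmac } from "node:crypto";

// Illustrative only -- not the backend's actual token code.
const b64url = (data: string): string =>
  Buffer.from(data).toString("base64url");

function sign(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

function verify(token: string, secret: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return sig === expected;
}

const token = sign({ userId: 1 }, "access-secret");
console.log(verify(token, "access-secret")); // true
console.log(verify(token, "wrong-secret"));  // false
```

This is why ACCESS_SECRET and REFRESH_SECRET must never be committed or shared outside the vault.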
The database is managed with Prisma ORM.
NOTE (8/1/2024): migrations on the prod server are currently NOT applied automatically; the migration in the prisma/migrations folder was applied manually.
If the migration is not for a Docker container, run:

```
npx prisma migrate dev
```

For a Docker container:

```
npx dotenv -e .env.dev -- npx prisma migrate dev --name {name of the migration}
```

To match production optimizations, run the following commands in your local PostgreSQL database.
Enable the pg_trgm extension (required for the GIN indexes):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```

Create GIN indexes on the Node table for the name, nodeUri, and descrip columns:

```sql
CREATE INDEX idx_name ON "Node" USING GIN (name gin_trgm_ops);
CREATE INDEX idx_nodeUri ON "Node" USING GIN ("nodeUri" gin_trgm_ops);
CREATE INDEX idx_descrip ON "Node" USING GIN ("descrip" gin_trgm_ops);
```

These steps ensure your local DB mirrors production's text search optimizations.
If not using Docker containers:

```
npx prisma studio
```

If using Docker containers:

```
npm run prisma:studio
```

After running this command, Prisma Studio opens on port 5555.
Database seeding happens in two ways with Prisma: manually with `prisma db seed`, and automatically during `prisma migrate dev`.
Run:

```
npx prisma db seed
```

or:

```
npm i
npx prisma migrate dev
```

When you want to use `prisma migrate dev` without seeding, you can pass the `--skip-seed` flag.
Create a .env file in the project root. If running with Docker, an additional .env.dev file is needed. Refer to the example below for env variables:

```
PORT=9000
DATABASE_URL="postgresql://postgres:postgres@postgres:5432/claim"
ACCESS_SECRET='...'
REFRESH_SECRET='...'
AWS_ACCESS_KEY_ID='...'
AWS_SECRET_ACCESS_KEY='...'
AWS_STORAGE_BUCKET_NAME='...'
AWS_S3_REGION_NAME='...'
DATA_PIPELINE_MS='http://trust-claim-data-pipeline:5000'
```

In .env.dev, change DATABASE_URL as below; everything else can be exactly as in .env:

```
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/claim"
```

The values of ACCESS_SECRET and REFRESH_SECRET can be anything.
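A common pattern is to validate these variables once at startup and fail fast when a required secret is missing. The sketch below is a hypothetical helper, not the backend's actual code; the variable names come from the example above:

```typescript
// Hypothetical startup check: fail fast if a required env var is absent.
const REQUIRED = ["DATABASE_URL", "ACCESS_SECRET", "REFRESH_SECRET"];

function loadConfig(env: Record<string, string | undefined>) {
  for (const key of REQUIRED) {
    if (!env[key]) throw new Error(`Missing required env var: ${key}`);
  }
  return {
    port: Number(env.PORT ?? 9000), // PORT=9000 in the example above
    databaseUrl: env.DATABASE_URL as string,
    accessSecret: env.ACCESS_SECRET as string,
    refreshSecret: env.REFRESH_SECRET as string,
  };
}

const cfg = loadConfig({
  DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/claim",
  ACCESS_SECRET: "dev-access",
  REFRESH_SECRET: "dev-refresh",
});
console.log(cfg.port); // 9000 (falls back when PORT is unset)
```

In the real app you would pass `process.env` to such a helper.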
SSH into the server with the private key. If you don't have the key, ask for it in Slack.

```
cd /data/trust_claim_backend
```

Inspect the running process:

```
pm2 status index
pm2 logs index
```

```
nvm use 20
cd /data/trust_claim_backend
git pull
npm i
```

If there is any database migration, it is a good idea to back up the database first; otherwise you may skip this step:

```
sudo su postgres
pg_dump claim > /postgres/backup_filename.sql
```

Then run the following two commands to generate artifacts and deploy migrations (this is already implemented in the CI/CD pipeline, but it is needed for local/manual deploys):
```
npx prisma generate
npx prisma migrate deploy
```

Then building the project is enough, because pm2 is watching for changes:

```
npm run build
```

NOTE: Run the following ONLY when the server is down:

```
pm2 start trust_claim_backend --watch
```

To completely reset the pm2 process, use:
```
pm2 delete trust_claim_backend
pm2 start build/index.js --name trust_claim_backend --cwd /data/trust_claim_backend --interpreter /data/home/ubuntu/.nvm/versions/node/v20.18.1/bin/node
pm2 save
```
Logs are in /data/home/ubuntu/.pm2/logs.
They can also be viewed with `pm2 logs trust_claim_backend`.
To see all details of the pm2 process, use:

```
PM2_HOME=/data/home/ubuntu/.pm2 /data/home/ubuntu/.nvm/versions/node/v16.15.1/bin/pm2 describe index
```

The Nginx config is located at /etc/nginx/sites-available/trustclaims.whatscookin.us. To change the config:
```
sudo vim /etc/nginx/sites-available/trustclaims.whatscookin.us
```

After changing the Nginx config, test it:

```
sudo nginx -t
```

Then reload the nginx service:

```
sudo systemctl reload nginx.service
```

Get the docker container id:

```
docker ps
```

Copy the db dump into your docker container:

```
docker cp <path>/trustclaims.sql <id>:/tmp/dump_file
```

Restore the db file:

```
docker exec -it <id> psql -U postgres -d claim -f /tmp/dump_file
```

Alternate instructions:
Run:

```
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres --name postgres-db postgres
```

Ensure you have a .env file:

```
PORT=9000
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/claim"
ACCESS_SECRET=**add_your_secret_keys_here**
REFRESH_SECRET=**add_your_secret_keys_here**
```

Then run:

```
npm run dev
```

OR run `npm run inspect` to be able to connect with a remote debugger, OR run from within an IDE such as WebStorm with a simple configuration.

You may also have to copy .env to .env.dev and run:

```
npm run migrate:dev
```

to set up the initial database.
The backend supports multiple OAuth applications via the auth_apps table; see prisma/protected/app_snippets.sql for credential management.
The backend can accept OAuth tokens from multiple client applications (e.g., Certify, Talent, etc.) using two approaches.

Approach 1: all frontends use the same Google/LinkedIn OAuth client ID:

```
# Backend .env
GOOGLE_CLIENT_ID=shared-client-id
GOOGLE_CLIENT_SECRET=shared-secret
```

All frontends must use these same credentials.
Approach 2: the backend accepts tokens from multiple OAuth apps:

```
# Backend .env
GOOGLE_CLIENT_ID=primary-client-id
GOOGLE_CLIENT_SECRET=primary-secret
ALLOWED_CLIENT_ID_2=secondary-client-id # For additional frontend apps
```

The backend's authApi.ts validates tokens against both client IDs (see line 45).
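The multi-client-ID check can be sketched as follows. The function names here are hypothetical (the real logic lives in authApi.ts), and a real implementation would first verify the token's signature, e.g. with google-auth-library, before comparing the token's `aud` claim:

```typescript
// Hypothetical sketch of the allowlist check; not the actual authApi.ts code.
// Collect every configured client ID, skipping unset variables.
function allowedAudiences(env: Record<string, string | undefined>): string[] {
  return [env.GOOGLE_CLIENT_ID, env.ALLOWED_CLIENT_ID_2].filter(
    (id): id is string => Boolean(id),
  );
}

// Accept a token only if its `aud` claim matches a configured client ID.
function isAllowedAudience(aud: string, allowed: string[]): boolean {
  return allowed.includes(aud);
}

const allowed = allowedAudiences({
  GOOGLE_CLIENT_ID: "primary-client-id",
  ALLOWED_CLIENT_ID_2: "secondary-client-id",
});
console.log(isAllowedAudience("secondary-client-id", allowed)); // true
console.log(isAllowedAudience("unknown-id", allowed));          // false
```

This is what lets a second frontend app authenticate without sharing the primary app's OAuth credentials.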
TODO for Future: Build an admin interface to manage OAuth client IDs dynamically via the auth_apps table, allowing registration of new frontend applications without environment variable changes.
