Phenom Production Environment

Reference for the production AWS infrastructure backing the Phenom mobile app — Cognito + Hasura + RDS + the action-handler Lambda. Covers what’s deployed, where to find secrets, how to apply changes, and how to smoke-test.

This page documents the live production stack for the Phenom UFO/UAP mobile app. Everything described here is deployed in AWS account 657033058608 and managed by Terraform in phenom-infra under environments/production/.

For the development environment, see the equivalent module calls in environments/development/.

Top-level facts

  • AWS account: 657033058608 (same account as development; resources are name-isolated, not account-isolated)
  • Region: us-east-1
  • Terraform project: phenom-infra repo, environments/production/
  • Terraform state: s3://phenom-production-tfstate/phenom-infra-repo/phenom-backend-production/terraform.tfstate
  • Default branch for Terraform: main (changes ship via PR → merge → terraform apply)
  • Default branch for Hasura metadata: phenom-backend repo; a develop → main release merge triggers the prod deploy workflow

Architecture in one diagram

                     Cloudflare (proxied)
                              │
                              ▼
                api.thephenom.app  →  ALB  →  ECS Fargate  →  Hasura graphql
                                                                    │
                                                                    │ admin secret
                                                                    ▼
                                                            RDS Postgres 17.4
                                                            (db.m5.large)

                Mobile / dev-nest clients
                              │
                              │  IdToken from Cognito  (us-east-1_knEL7cqS3)
                              ▼
            createPhenom / getPhenoms (Hasura Action)
                              │  forward_client_headers: true
                              ▼
            Lambda Function URL (auth: NONE, JWT verified inside)
                              │
                              ├──→  AWS Secrets Manager: phenom-prod-app-secrets
                              ├──→  Hasura admin (over ALB)
                              └──→  S3 presigned PUT/GET → phenom-prod-media-storage

Endpoints

  • Hasura GraphQL: https://api.thephenom.app/v1/graphql (auth: Cognito JWT Bearer token or x-hasura-admin-secret)
  • Hasura Console: not exposed in prod (HASURA_GRAPHQL_ENABLE_CONSOLE = false); use the Hasura CLI instead
  • Cognito Hosted UI: https://phenom-prod-hasura-auth.auth.us-east-1.amazoncognito.com/ (auth: implicit-flow OAuth)
  • Action handler (Lambda Function URL): https://e5panptjuxrn4fkkcco3ndktbu0akzoz.lambda-url.us-east-1.on.aws/ (auth: Cognito JWT verified inside the function; auth type at the URL itself is NONE)

Cognito

  • User Pool ID: us-east-1_knEL7cqS3 (phenom-prod)
  • App Client ID: 8uun49ru7f3fdvmlc12vqig3a (phenom-prod-hasura-client)
  • Auth flows enabled: ALLOW_USER_SRP_AUTH, ALLOW_USER_PASSWORD_AUTH, ALLOW_REFRESH_TOKEN_AUTH
  • OAuth flow: implicit
  • OAuth scopes: openid, email, profile
  • Hosted UI domain: phenom-prod-hasura-auth
  • Lambda triggers: pre-token-generation → phenom-prod-hasura-cognito-trigger; post-authentication + post-confirmation → phenom-prod-hasura-cognito-sync-users
  • Deletion protection: ACTIVE

What the triggers do

  • hasura-cognito-trigger (pre-token-generation) — adds https://hasura.io/jwt/claims to the JWT with x-hasura-user-id, x-hasura-default-role: user, and x-hasura-allowed-roles plus x-hasura-chat-role from the optional chat_members lookup.
  • hasura-cognito-sync-users (post-confirmation, post-authentication) — upserts the new Cognito user into the users table using the Hasura admin secret. Schema is discovered via introspection so Cognito-attribute → column mapping is automatic.
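The claims-injection step above can be sketched as a Lambda handler. This is an illustrative outline only, assuming a Python runtime: the deployed function's runtime, and its real chat_members lookup, may differ, and lookup_chat_role here is a stub.

```python
# Hypothetical sketch of the pre-token-generation trigger's claims logic.
import json


def lookup_chat_role(user_id: str):
    """Stub for the optional chat_members lookup (real code queries the database)."""
    return None


def handler(event, context=None):
    user_id = event["request"]["userAttributes"]["sub"]
    claims = {
        "x-hasura-user-id": user_id,
        "x-hasura-default-role": "user",
        "x-hasura-allowed-roles": ["user"],
    }
    chat_role = lookup_chat_role(user_id)
    if chat_role:
        claims["x-hasura-chat-role"] = chat_role
    # Cognito claim overrides must be strings, so the Hasura claims object
    # is stringified; Hasura can parse it back out of the JWT.
    event["response"]["claimsOverrideDetails"] = {
        "claimsToAddOrOverride": {
            "https://hasura.io/jwt/claims": json.dumps(claims)
        }
    }
    return event
```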

Test account

  • Username: test@thephenom.app
  • Password: PhenomProd!2026

Used by Postman + curl smoke tests. Treat as a shared smoke-test account; don’t store real user data on it.

Hasura

  • URL: https://api.thephenom.app/v1/graphql
  • Admin secret location: AWS Secrets Manager phenom-prod-app-secrets, JSON key graphql_admin_secret
  • Database: RDS Postgres phenom-prod-postgres.c8toq6uq223c.us-east-1.rds.amazonaws.com:5432
  • JWT verifier: RS256 against the Cognito User Pool JWKS at https://cognito-idp.us-east-1.amazonaws.com/us-east-1_knEL7cqS3/.well-known/jwks.json
  • Console enabled? No (HASURA_GRAPHQL_ENABLE_CONSOLE = false)
  • Custom Actions: createPhenom (mutation), getPhenoms (query), both backed by the Lambda Function URL above
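Hasura consumes that JWKS through the HASURA_GRAPHQL_JWT_SECRET environment variable. For a Cognito pool whose trigger stringifies the Hasura claims (as the pre-token-generation Lambda does), the value typically looks like the following; the production task definition may set additional fields (e.g. audience), so treat this as the expected shape rather than the exact deployed value.

```json
{
  "type": "RS256",
  "jwk_url": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_knEL7cqS3/.well-known/jwks.json",
  "claims_format": "stringified_json"
}
```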

Pulling the admin secret

aws secretsmanager get-secret-value \
  --secret-id phenom-prod-app-secrets \
  --query SecretString --output text \
  | jq -r .graphql_admin_secret

Applying Hasura metadata

The phenom-backend repo has a GitHub Actions workflow (.github/workflows/deploy-hasura-production.yaml) that runs hasura migrate apply and hasura metadata apply against this endpoint when:

  1. Anything under hasura/** is pushed to main (typically a develop → main release merge), or
  2. The workflow is dispatched manually from the Actions tab.

The workflow runs in the Production GitHub Environment so it requires:

  • vars.HASURA_ENDPOINT = https://api.thephenom.app
  • secrets.HASURA_ADMIN_SECRET = the value pulled above

Adding a required reviewer rule on the Production environment is recommended for prod safety.
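Putting the pieces above together, the trigger and environment wiring of such a workflow has roughly this shape. This is an illustrative sketch, not the repo's actual file; see .github/workflows/deploy-hasura-production.yaml for the real definition.

```yaml
# Illustrative shape only; assumes the hasura CLI is on PATH
# (the real workflow installs it as a setup step).
on:
  push:
    branches: [main]
    paths: ["hasura/**"]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: Production   # supplies vars.HASURA_ENDPOINT and secrets.HASURA_ADMIN_SECRET
    steps:
      - uses: actions/checkout@v4
      - run: |
          hasura migrate apply --endpoint "${{ vars.HASURA_ENDPOINT }}" \
            --admin-secret "${{ secrets.HASURA_ADMIN_SECRET }}" --database-name default
          hasura metadata apply --endpoint "${{ vars.HASURA_ENDPOINT }}" \
            --admin-secret "${{ secrets.HASURA_ADMIN_SECRET }}"
        working-directory: hasura
```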

To apply manually from your laptop:

cd ~/c/phenom-backend/hasura
ADMIN=$(aws secretsmanager get-secret-value --secret-id phenom-prod-app-secrets \
        --query SecretString --output text | jq -r .graphql_admin_secret)
hasura migrate apply --endpoint https://api.thephenom.app --admin-secret "$ADMIN" --database-name default
hasura metadata apply --endpoint https://api.thephenom.app --admin-secret "$ADMIN"

RDS

  • Identifier: phenom-prod-postgres
  • Engine: Postgres 17.4
  • Instance class: db.m5.large
  • Storage: 20 GB gp3, autoscaling up to 100 GB, AES256 encrypted
  • Network: private subnets only, no public access
  • Backup: 7-day retention, 03:00–04:00 UTC window
  • Maintenance window: Sun 04:00–05:00 UTC
  • skip_final_snapshot: false
  • deletion_protection: true (must be flipped before Terraform can destroy the instance)
  • Master credentials: AWS Secrets Manager rds/phenom-prod-db/credentials ({username, password})
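If the instance ever does need to be destroyed, the deletion_protection flip happens in Terraform first. A sketch of the relevant attributes, with a hypothetical resource name (check environments/production/ for the real one):

```hcl
resource "aws_db_instance" "postgres" {
  # ... (other attributes unchanged)

  deletion_protection = false  # flip from true, apply, then destroy
  skip_final_snapshot = false  # a final snapshot is still taken on destroy
}
```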

ALB

  • ALB name: phenom-prod-alb
  • Scheme: internet-facing, in public subnets
  • Listeners: HTTP 80 (default forward to the graphql target group), HTTPS 443 (TLS 1.2 baseline)
  • Cert: shares the existing *.thephenom.app ACM wildcard via data "aws_acm_certificate"
  • Listener rules: only api.thephenom.app → graphql target group; no auth/storage/functions rules
  • Deletion protection: enabled

ECS

  • Cluster: phenom-prod-cluster
  • Container Insights: enabled
  • Service-discovery namespace: phenom-prod.local
  • Tasks: only graphql (Hasura); auth/storage/functions are intentionally absent
  • graphql sizing: 512 CPU units (0.5 vCPU) / 1024 MB memory, Fargate, desired_count 1
  • graphql image: nhost/graphql-engine:v2.48.5-ce

The HASURA_ACTION_PHENOM_URL env var on the graphql container resolves to the action-handler Function URL, which is how createPhenom and getPhenoms are wired.

Lambdas

  • phenom-prod-hasura-action-phenom-handler — source: environments/production/lambda-functions/hasura-action-phenom-handler/; trigger: Function URL (public, JWT verified inside)
  • phenom-prod-hasura-cognito-trigger — source: environments/production/lambda-functions/hasura-cognito-trigger/; trigger: Cognito pre-token-generation
  • phenom-prod-hasura-cognito-sync-users — source: environments/production/lambda-functions/hasura-cognito-sync-users/; trigger: Cognito post-authentication and post-confirmation

The action-handler reads phenom-prod-app-secrets for the Hasura admin secret and writes presigned S3 URLs against phenom-prod-media-storage.

S3 buckets

  • phenom-prod-media-storage — phenom uploads (createPhenom Lambda destination); versioning ON, AES256, public access fully blocked
  • phenom-production-tfstate — Terraform state; versioning ON, public access fully blocked

S3 keys for new uploads use the shape media/<phenomId>/<sortOrder>_<phenomId>_<sanitizedFilename> so multiple files for the same phenom don’t collide.
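That key shape can be sketched as a small helper. This is hypothetical: the deployed handler's actual sanitization rules may differ, and media_key is an illustrative name, not the real function.

```python
import re


def media_key(phenom_id: str, sort_order: int, filename: str) -> str:
    """Build an S3 key of the shape media/<phenomId>/<sortOrder>_<phenomId>_<sanitizedFilename>.

    Hypothetical sanitizer: keep alphanumerics, dot, dash, underscore;
    replace everything else with an underscore.
    """
    sanitized = re.sub(r"[^A-Za-z0-9._-]", "_", filename)
    return f"media/{phenom_id}/{sort_order}_{phenom_id}_{sanitized}"
```

Because sort_order and phenom_id are baked into the key, two files in the same createPhenom call (or re-uploads for the same phenom) land at distinct keys rather than overwriting each other.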

Secrets Manager

  • phenom-prod-app-secrets: database_url, jwt_secret, graphql_admin_secret
  • rds/phenom-prod-db/credentials: username, password

DNS

  • api.thephenom.app: Cloudflare CNAME, proxied → phenom-prod-alb-1196696419.us-east-1.elb.amazonaws.com

api.thephenom.app is the only ALB-fronted hostname in production.

Deferred services (dev-only today)

These exist in development and may land in prod later via a fresh Terraform PR. They are deliberately not deployed yet.

  • Chat (synapse, mcp, link-preview, hasura-lite, shared): currently dev-only
  • video_upload (drop app pipeline): currently dev-only

Applying infrastructure changes

cd ~/c/phenom-infra

# 1. Make changes on a feature branch
git checkout -b feat/your-change
# … edit environments/production/*.tf …

# 2. Plan + commit + open PR
cd environments/production
terraform plan
git add -A && git commit -m "feat(infra): your change"
gh pr create --base main

# 3. Once merged, sync main and apply
git checkout main && git pull
cd environments/production && terraform apply

For high-risk changes (anything that destroys, replaces, or rotates a secret), prefer terraform apply -target=... to scope the blast radius.

Smoke-testing the signing flow

The repo phenom-backend/hasura/postman/ ships a Postman collection plus three environment files (local / staging / production). Import in Postman, pick the Phenom — AWS production environment, then run:

  1. Sign in to Cognito → captures id_token
  2. createPhenom (multi-file) → captures phenom_id, upload_url for the first file
  3. PUT to upload_url (pick any small file in the request body)
  4. getPhenoms → returns row + signed_uri
  5. GET signed_uri → bytes round-trip

Or via curl + the test account from above:

JWT=$(aws cognito-idp initiate-auth \
  --client-id 8uun49ru7f3fdvmlc12vqig3a \
  --auth-flow USER_PASSWORD_AUTH \
  --auth-parameters USERNAME=test@thephenom.app,PASSWORD='PhenomProd!2026' \
  --query 'AuthenticationResult.IdToken' --output text)

curl -s -X POST https://api.thephenom.app/v1/graphql \
  -H "Authorization: Bearer $JWT" -H "Content-Type: application/json" \
  -d '{"query":"mutation($i: CreatePhenomInput!) { createPhenom(input: $i) { phenomId uploads { filename uploadUrl } } }",
       "variables":{"i":{"lat":40.7,"lng":-74.0,"filenames":["smoke.txt"],"media_type":"image"}}}' \
  | jq

Cost snapshot

  • 2× NAT Gateway: ~$65
  • RDS db.m5.large + 20 GB gp3: ~$127
  • ALB: ~$22
  • 1× ECS Fargate task (graphql): ~$15
  • Lambda × 3: ~$0 (free tier)
  • Cognito: $0 (under 50K MAU free tier)
  • Secrets Manager: ~$1
  • CloudWatch Logs: ~$3
  • S3 + data transfer: ~$5–15
  • Total: ~$245/mo

Major levers if cost matters:

  • Single NAT Gateway instead of two (saves ~$32, loses multi-AZ redundancy)
  • db.t3.medium instead of db.m5.large (saves ~$75, burstable CPU vs guaranteed)