Phenom Production Environment
This page documents the live production stack for the Phenom UFO/UAP mobile app. Everything described here is deployed in AWS account 657033058608 and managed by Terraform in phenom-infra under environments/production/.
For the development environment, see the equivalent module calls in environments/development/.
Top-level facts
| Item | Value |
|---|---|
| AWS account | 657033058608 (same account as development; resources are name-isolated, not account-isolated) |
| Region | us-east-1 |
| Terraform project | phenom-infra repo, environments/production/ |
| Terraform state | s3://phenom-production-tfstate/phenom-infra-repo/phenom-backend-production/terraform.tfstate |
| Default branch for Terraform | main (changes ship via PR → merge → terraform apply) |
| Default branch for Hasura metadata | phenom-backend repo, develop→main release pattern triggers the prod deploy workflow |
Architecture in one diagram
Cloudflare (proxied)
        │
        ▼
api.thephenom.app → ALB → ECS Fargate → Hasura graphql
                                            │
                                            │ admin secret
                                            ▼
                                      RDS Postgres 17.4
                                        (db.m5.large)

Mobile / dev-nest clients
        │
        │ IdToken from Cognito (us-east-1_knEL7cqS3)
        ▼
createPhenom / getPhenoms (Hasura Action)
        │ forward_client_headers: true
        ▼
Lambda Function URL (auth: NONE, JWT verified inside)
        │
        ├──→ AWS Secrets Manager: phenom-prod-app-secrets
        ├──→ Hasura admin (over ALB)
        └──→ S3 presigned PUT/GET → phenom-prod-media-storage
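A quick end-to-end check of the Cloudflare → ALB → ECS path, assuming Hasura's default /healthz endpoint is reachable through the listener rule:
curl -si https://api.thephenom.app/healthz
# a 200 with body "OK" means the graphql task is up and routable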
Endpoints
| Service | URL | Auth |
|---|---|---|
| Hasura GraphQL | https://api.thephenom.app/v1/graphql | Cognito JWT (Bearer) or x-hasura-admin-secret |
| Hasura Console | not exposed in prod (HASURA_GRAPHQL_ENABLE_CONSOLE = false) — use the Hasura CLI instead | |
| Cognito Hosted UI | https://phenom-prod-hasura-auth.auth.us-east-1.amazoncognito.com/ | implicit-flow OAuth |
| Action handler (Lambda Function URL) | https://e5panptjuxrn4fkkcco3ndktbu0akzoz.lambda-url.us-east-1.on.aws/ | Cognito JWT verified inside the function (auth type at the URL is NONE) |
Cognito
| Field | Value |
|---|---|
| User Pool ID | us-east-1_knEL7cqS3 (phenom-prod) |
| App Client ID | 8uun49ru7f3fdvmlc12vqig3a (phenom-prod-hasura-client) |
| Auth flows enabled | ALLOW_USER_SRP_AUTH, ALLOW_USER_PASSWORD_AUTH, ALLOW_REFRESH_TOKEN_AUTH |
| OAuth flow | implicit |
| OAuth scopes | openid email profile |
| Hosted UI domain | phenom-prod-hasura-auth |
| Lambda triggers | pre-token-generation → phenom-prod-hasura-cognito-trigger; post-authentication + post-confirmation → phenom-prod-hasura-cognito-sync-users |
| Deletion protection | ACTIVE |
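A read-only way to verify the trigger wiring and deletion protection from the CLI:
aws cognito-idp describe-user-pool --user-pool-id us-east-1_knEL7cqS3 \
  --query 'UserPool.{triggers:LambdaConfig,deletion_protection:DeletionProtection}'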
What the triggers do
- hasura-cognito-trigger (pre-token-generation) — adds https://hasura.io/jwt/claims to the JWT with x-hasura-user-id, x-hasura-default-role: user, and x-hasura-allowed-roles plus x-hasura-chat-role from the optional chat_members lookup.
- hasura-cognito-sync-users (post-confirmation, post-authentication) — upserts the new Cognito user into the users table using the Hasura admin secret. Schema is discovered via introspection, so the Cognito-attribute → column mapping is automatic.
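To eyeball the injected claims on a live token, decode the IdToken payload (rough-and-ready; $JWT is an IdToken such as the one fetched in the smoke-test snippet at the bottom of this page):
# base64url padding is papered over by appending '=='; good enough for eyeballing
echo "$JWT" | cut -d. -f2 | tr '_-' '/+' | sed 's/$/==/' \
  | base64 -d 2>/dev/null | jq '."https://hasura.io/jwt/claims"'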
Test account
| Field | Value |
|---|---|
| Username | test@thephenom.app |
| Password | PhenomProd!2026 |
Used by Postman + curl smoke tests. Treat as a shared smoke-test account; don’t store real user data on it.
Hasura
| Field | Value |
|---|---|
| URL | https://api.thephenom.app/v1/graphql |
| Admin secret location | AWS Secrets Manager phenom-prod-app-secrets, JSON key graphql_admin_secret |
| Database | RDS Postgres phenom-prod-postgres.c8toq6uq223c.us-east-1.rds.amazonaws.com:5432 |
| JWT verifier | RS256 against the Cognito User Pool JWKS — https://cognito-idp.us-east-1.amazonaws.com/us-east-1_knEL7cqS3/.well-known/jwks.json |
| Console enabled? | No (HASURA_GRAPHQL_ENABLE_CONSOLE = false) |
| Custom Actions | createPhenom (mutation), getPhenoms (query) — both backed by the Lambda Function URL above |
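The JWT verifier row maps to a HASURA_GRAPHQL_JWT_SECRET env var on the graphql container. The exact value lives in the task definition / app secrets; the standard Hasura jwk_url shape would look like:
HASURA_GRAPHQL_JWT_SECRET='{"type":"RS256","jwk_url":"https://cognito-idp.us-east-1.amazonaws.com/us-east-1_knEL7cqS3/.well-known/jwks.json"}'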
Pulling the admin secret
aws secretsmanager get-secret-value \
--secret-id phenom-prod-app-secrets \
--query SecretString --output text \
| jq -r .graphql_admin_secret
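With the secret in a variable, a quick admin-level sanity check against the endpoint looks like this:
ADMIN=$(aws secretsmanager get-secret-value --secret-id phenom-prod-app-secrets \
  --query SecretString --output text | jq -r .graphql_admin_secret)
curl -s https://api.thephenom.app/v1/graphql \
  -H "x-hasura-admin-secret: $ADMIN" -H "Content-Type: application/json" \
  -d '{"query":"query { __typename }"}' | jq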
Applying Hasura metadata
The phenom-backend repo has a GitHub Actions workflow (.github/workflows/deploy-hasura-production.yaml) that runs hasura migrate apply and hasura metadata apply against this endpoint when:
- Anything under hasura/** is pushed to main (typical: a develop→main release merge), or
- The workflow is dispatched manually from the Actions tab.
The workflow runs in the Production GitHub Environment so it requires:
- vars.HASURA_ENDPOINT = https://api.thephenom.app
- secrets.HASURA_ADMIN_SECRET = the value pulled above
Adding a required reviewer rule on the Production environment is recommended for prod safety.
To apply manually from your laptop:
cd ~/c/phenom-backend/hasura
ADMIN=$(aws secretsmanager get-secret-value --secret-id phenom-prod-app-secrets \
--query SecretString --output text | jq -r .graphql_admin_secret)
hasura migrate apply --endpoint https://api.thephenom.app --admin-secret "$ADMIN" --database-name default
hasura metadata apply --endpoint https://api.thephenom.app --admin-secret "$ADMIN"
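If you want to preview what the apply will change first, the CLI can diff local metadata against the server using the same flags:
hasura metadata diff --endpoint https://api.thephenom.app --admin-secret "$ADMIN"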
RDS
| Field | Value |
|---|---|
| Identifier | phenom-prod-postgres |
| Engine | Postgres 17.4 |
| Instance class | db.m5.large |
| Storage | 20 GB gp3, max 100 GB autoscale, AES256 encrypted |
| Network | private subnets only, no public access |
| Backup | 7-day retention, 03:00–04:00 UTC window |
| Maintenance window | Sun 04:00–05:00 UTC |
| skip_final_snapshot | false |
| deletion_protection | true (must be flipped before Terraform can destroy) |
| Master credentials | AWS Secrets Manager rds/phenom-prod-db/credentials ({username, password}) |
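To check instance status, engine version, and deletion protection without opening the console (read-only):
aws rds describe-db-instances --db-instance-identifier phenom-prod-postgres \
  --query 'DBInstances[0].{status:DBInstanceStatus,engine:EngineVersion,class:DBInstanceClass,deletion_protection:DeletionProtection}'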
ALB
| Field | Value |
|---|---|
| ALB name | phenom-prod-alb |
| Scheme | internet-facing, in public subnets |
| Listeners | HTTP 80 (default forward to the graphql target group), HTTPS 443 (TLS 1.2 baseline) |
| Cert | shares the existing *.thephenom.app ACM wildcard via data "aws_acm_certificate" |
| Listener rules | only api.thephenom.app → graphql target group. No auth/storage/functions rules. |
| Deletion protection | enabled |
ECS
| Field | Value |
|---|---|
| Cluster | phenom-prod-cluster |
| Container Insights | enabled |
| Service-discovery namespace | phenom-prod.local |
| Tasks | only graphql (Hasura). Auth/storage/functions are intentionally absent. |
| graphql sizing | 0.5 vCPU (512 CPU units) / 1024 MB, FARGATE, desired_count 1 |
| graphql image | nhost/graphql-engine:v2.48.5-ce |
The HASURA_ACTION_PHENOM_URL env var on the graphql container resolves to the action-handler Function URL, which is how createPhenom and getPhenoms are wired.
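To confirm the wiring on the live task definition (the family name phenom-prod-graphql is assumed here; check aws ecs list-task-definitions if it differs):
aws ecs describe-task-definition --task-definition phenom-prod-graphql \
  --query 'taskDefinition.containerDefinitions[0].environment[?name==`HASURA_ACTION_PHENOM_URL`]'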
Lambdas
| Function | Source | Trigger |
|---|---|---|
| phenom-prod-hasura-action-phenom-handler | environments/production/lambda-functions/hasura-action-phenom-handler/ | Function URL (public, JWT verified inside) |
| phenom-prod-hasura-cognito-trigger | environments/production/lambda-functions/hasura-cognito-trigger/ | Cognito pre-token-generation |
| phenom-prod-hasura-cognito-sync-users | environments/production/lambda-functions/hasura-cognito-sync-users/ | Cognito post-authentication, post-confirmation |
The action-handler reads phenom-prod-app-secrets for the Hasura admin secret and writes presigned S3 URLs against phenom-prod-media-storage.
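To watch the action handler in production, tail its logs (this assumes the default /aws/lambda/<function-name> log group):
aws logs tail /aws/lambda/phenom-prod-hasura-action-phenom-handler --since 1h --follow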
S3 buckets
| Bucket | Purpose | Config |
|---|---|---|
| phenom-prod-media-storage | phenom uploads (createPhenom Lambda destination) | versioning ON, AES256, public access fully blocked |
| phenom-production-tfstate | Terraform state | versioning ON, public access fully blocked |
S3 keys for new uploads use the shape media/<phenomId>/<sortOrder>_<phenomId>_<sanitizedFilename> so multiple files for the same phenom don’t collide.
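To spot-check how keys are landing under that prefix:
aws s3 ls s3://phenom-prod-media-storage/media/ --recursive | head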
Secrets Manager
| Secret | Keys |
|---|---|
| phenom-prod-app-secrets | database_url, jwt_secret, graphql_admin_secret |
| rds/phenom-prod-db/credentials | username, password |
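Pulling the RDS master credentials follows the same pattern as the app secret above:
aws secretsmanager get-secret-value --secret-id rds/phenom-prod-db/credentials \
  --query SecretString --output text | jq -r .username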
DNS
| Hostname | Where | Routes to |
|---|---|---|
| api.thephenom.app | Cloudflare CNAME, proxied | phenom-prod-alb-1196696419.us-east-1.elb.amazonaws.com |
api.thephenom.app is the only ALB-fronted hostname in production.
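Because the record is proxied, resolving it returns Cloudflare edge IPs rather than the ALB's own addresses; both lookups are handy when debugging connectivity:
dig +short api.thephenom.app
dig +short phenom-prod-alb-1196696419.us-east-1.elb.amazonaws.com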
Deferred services (dev-only today)
These exist in development and may land in prod later via a fresh Terraform PR. They are deliberately not deployed yet.
| Thing | Notes |
|---|---|
| Chat (synapse, mcp, link-preview, hasura-lite, shared) | Currently dev-only. |
| video_upload (drop app pipeline) | Currently dev-only. |
Applying infrastructure changes
cd ~/c/phenom-infra
# 1. Make changes on a feature branch
git checkout -b feat/your-change
# … edit environments/production/*.tf …
# 2. Plan + commit + open PR
cd environments/production
terraform plan
git add -A && git commit -m "feat(infra): your change"
gh pr create --base main
# 3. Once merged, sync main and apply
git checkout main && git pull
cd environments/production && terraform apply
For high-risk changes (anything that destroys, replaces, or rotates a secret), prefer terraform apply -target=... to scope the blast radius.
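For example (the module address below is illustrative; take the real ones from terraform state list):
cd environments/production
terraform plan  -target=module.rds   # hypothetical module address
terraform apply -target=module.rds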
Smoke-testing the signing flow
The repo phenom-backend/hasura/postman/ ships a Postman collection plus three environment files (local / staging / production). Import in Postman, pick the Phenom — AWS production environment, then run:
- Sign in to Cognito → captures id_token
- createPhenom (multi-file) → captures phenom_id, upload_url for the first file
- PUT to upload_url (pick any small file in the request body)
- getPhenoms → returns row + signed_uri
- GET signed_uri → bytes round-trip
Or via curl + the test account from above:
JWT=$(aws cognito-idp initiate-auth \
--client-id 8uun49ru7f3fdvmlc12vqig3a \
--auth-flow USER_PASSWORD_AUTH \
--auth-parameters USERNAME=test@thephenom.app,PASSWORD='PhenomProd!2026' \
--query 'AuthenticationResult.IdToken' --output text)
curl -s -X POST https://api.thephenom.app/v1/graphql \
-H "Authorization: Bearer $JWT" -H "Content-Type: application/json" \
-d '{"query":"mutation($i: CreatePhenomInput!) { createPhenom(input: $i) { phenomId uploads { filename uploadUrl } } }",
"variables":{"i":{"lat":40.7,"lng":-74.0,"filenames":["smoke.txt"],"media_type":"image"}}}' \
| jq
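To finish the round trip by hand, PUT a small file to the returned upload URL and then re-query getPhenoms with the same Bearer token (the exact selection set is in the Postman collection):
UPLOAD_URL='<paste uploads[0].uploadUrl from the createPhenom response>'
echo "hello from the smoke test" > smoke.txt
curl -s -X PUT --upload-file smoke.txt "$UPLOAD_URL"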
Cost snapshot
| Item | Monthly |
|---|---|
| 2× NAT Gateway | ~$65 |
| RDS db.m5.large + 20 GB gp3 | ~$127 |
| ALB | ~$22 |
| 1× ECS Fargate task (graphql) | ~$15 |
| Lambda × 3 | ~$0 (free tier) |
| Cognito | $0 (under 50K MAU free tier) |
| Secrets Manager | ~$1 |
| CloudWatch Logs | ~$3 |
| S3 + data transfer | ~$5–15 |
| Total | ~$245/mo |
Major levers if cost matters:
- Single NAT Gateway instead of two (saves ~$32, loses multi-AZ redundancy)
- db.t3.medium instead of db.m5.large (saves ~$75, burstable CPU vs guaranteed)