Maersk Shipment Platform — Port of Antwerp

NxtPort Certified Pick-up Integration

A distributed microservice architecture replacing legacy EDI/COREOR messaging with modern API-based container release automation for Belgian ports. Mandatory digital infrastructure under the Port Police Regulations — ensuring every supply chain partner is connected to NxtPort CPu.

6 Microservices
5 Kafka Topics
9 Belgian Terminals
2 Ports Served

Why this system exists

NxtPort Certified Pick-up (CPu) is mandatory digital infrastructure for the Port of Antwerp-Bruges. All supply chain partners in the container release process must be connected.

The EDI → API Transformation

The previous container release integration relied on EDI/COREOR messages — a legacy batch-oriented protocol that was rigid, hard to debug, and incompatible with NxtPort's modern API requirements. This system replaces that with a real-time, event-driven API architecture.

GCSS (Global Container Shipping System) emits EBDD events when cargo release tasks change state. These events flow through our 6-service pipeline, are validated against Belgian port rules, transformed into NxtPort-compatible formats, and delivered via OAuth2-secured API calls.

⊘ Legacy EDI/COREOR

  • Batch-oriented messaging protocol
  • No real-time processing
  • Difficult error handling and debugging
  • Rigid data format — hard to evolve
  • No OAuth2 / modern authentication
  • Manual retry and reconciliation

✓ New API-based CPu Integration

  • Real-time event-driven architecture
  • Sub-second event processing
  • Automated retry with exponential backoff
  • Schema-evolved Avro contracts
  • OAuth2 + HMAC secured API delivery
  • Full audit trail and observability
01

GCSS EBDD Events

Event-Based Data Distribution — external systems subscribe to GCSS Activity Plan events, receiving shipment and Transport Document data on a message queue after each event occurs.

02

ACR Task Lifecycle

ACR tasks can be opened, closed, re-opened, and re-closed. Critical data amendments (vessel, port, release party) trigger re-open. Non-critical changes skip COREOR.

03

Submit & Delete Only

The NxtPort COREOR integration uses only "submit" and "delete" operations. ACR_Closed triggers submit, ACR re-open triggers delete (since the API can't know what changed).
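The decision rule above can be sketched in Java. This is an illustrative mapping only — the event names and the `previouslyClosed` flag are simplifications of the production logic split across the COREOR Engine and Database Engine:

```java
import java.util.List;

// Hypothetical sketch of the submit/delete decision described above.
// Event names and return values are illustrative, not the production contract.
public class CoreorOperationMapper {

    /** Returns the ordered COREOR operations triggered by an ACR state change. */
    public static List<String> operationsFor(String acrEvent, boolean previouslyClosed) {
        switch (acrEvent) {
            case "ACR_Closed":
                // A re-close must first retract the earlier submit.
                return previouslyClosed
                        ? List.of("delete", "submit")
                        : List.of("submit");
            case "ACR_Open":
                // A re-open always deletes: the API cannot tell whether the
                // amendment was critical, so it retracts the release to be safe.
                return previouslyClosed ? List.of("delete") : List.of();
            default:
                return List.of(); // other task types never reach NxtPort
        }
    }
}
```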

04

Legal Compliance

The CPu framework is mandated by Port Police Regulations. All supply chain partners must participate — this is critical digital infrastructure for the port.


The journey of a container release

Click any node to explore what happens inside. Hit Play to watch the story unfold automatically.

🏭
GCSS
Source
publishes
TPDoc Topic
Kafka · shared
consumes
⚙️
COREOR Engine
Service 1
produces
DBE Topic
Kafka · dedicated
DB Engine consumes
🧠
Database Engine
Service 2
produces
Sender Topic
Kafka
consumes
📡
Notif. Sender
Service 3
HTTPS · HMAC
🏗
NxtPort CPu
External
🗄️
PostgreSQL
JSONB ledger
persists events · reads back for dependency chains
🚢
Vessels API
Reference
fetches IMO & call sign · Caffeine 10-min
📜
Audit Topic
Kafka · audits
every delivery attempt logged for compliance
🔁
Retry Topic
Kafka · retries
5xx & transient failures land here for later replay
async webhook callback (hours / days later)
🔗
Integration API
Service 4
classifies
🗂️
5 Scope Types
Routing
writes status
🗄️
PostgreSQL
same ledger
creates task
📋
GCSS Tasks
Manual follow-up
Retry & reschedule loop
🔁
Retry Topic
Kafka
consumes
🧰
Retry Consumer
Service 5
persists
📇
notification_retries
Postgres table
picks & re-sends
⏱️
Status Scheduler
Service 6
The Status Scheduler re-publishes due retries onto the Sender Topic, closing the loop. Meanwhile the Audit Topic records every attempt, and the Integration API reconciles the final verdict from NxtPort back into the same PostgreSQL ledger.


publishes to
2
KAFKA TOPIC · ebddTransportDocument.v2

⚡ Shared Kafka Cluster

Avro-serialized TPDoc with nested tag hierarchy (tag0000→tag2100→tag2600→tag2620/2630). SASL_SSL + SCRAM-SHA-512 auth. Confluent Schema Registry.

consumes from
3
SERVICE 1 · consumes → filters → extracts → maps

⚙️ external-coreor-engine

3 concurrent Kafka consumer threads filter for ACR_Open / ACR_Closed only. Runs NxtPort pre-check (Belgian terminal validation). PayloadVariableExtractor extracts 30+ variables. CoreorMapper transforms to EquipmentReleaseEvent with RKST→APCS terminal codes and embedded NxtPort OAuth2 credentials.

🔍 NotificationEventListener — filter ACR events
🇧🇪 PayloadVariableExtractor — Belgian terminal pre-check
🔄 CoreorMapper — map TransportDocument → EquipmentReleaseEvent
📤 EventDelegator — produce to Kafka
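The RKST→APCS terminal resolution mentioned above can be pictured as an immutable lookup table. The nine real Belgian terminal code pairs are internal, so the entries below are placeholders, not actual codes:

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of an immutable RKST→APCS lookup in the spirit of
// TerminalCodeResolver. The code pairs here are placeholders.
public final class TerminalCodeResolver {

    private static final Map<String, String> RKST_TO_APCS = Map.of(
            "RKST_EXAMPLE_A", "APCS_EXAMPLE_1",   // placeholder pair
            "RKST_EXAMPLE_B", "APCS_EXAMPLE_2");  // placeholder pair

    private TerminalCodeResolver() {}

    /** Empty result means the code is not a supported Belgian terminal. */
    public static Optional<String> resolve(String rkstCode) {
        return Optional.ofNullable(RKST_TO_APCS.get(rkstCode));
    }
}
```

An empty `Optional` doubles as the pre-check: events for unsupported terminals are filtered out before mapping.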
produces to
4
KAFKA TOPIC · coreordatabaseengine.v1

⚡ Dedicated Kafka Cluster

EquipmentReleaseEvent (Avro) — event metadata, equipment array with release details, vessel info, and embedded NxtPort OAuth2 credentials. Separate cluster from GCSS with its own Schema Registry.

consumes from
5
SERVICE 2 · consumes → enriches → persists → produces

🧠 nxtport-database-engine

The business logic brain. Consumes EquipmentReleaseEvent, enriches with external API data, applies complex ACR scenario logic, persists to PostgreSQL, and produces downstream.

🚢 Calls Vessels API — fetches vessel IMO number & call sign (Caffeine cached, 10min TTL)
📋 ACR Scenario Logic — new-closed (SUBMIT), re-closed (DELETE+SUBMIT), re-open (DELETE with dependency)
🗄️ Persists to PostgreSQL — nxtport_coreor table with JSONB event payload, status tracking, dependency chains
📧 SendGrid Email — async failure notifications on error scenarios
📝 GCSS Task API — creates manual intervention tasks on critical errors (OAuth2 auth, cached 55min)
📤 Produces to Kafka — ExternalEventNotification to sender topic
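The dependency-chain bookkeeping behind the ACR scenario logic can be sketched as follows. The record shape and field names are illustrative, not the actual `nxtport_coreor` schema:

```java
import java.util.List;
import java.util.UUID;

// Minimal sketch of the SUBMIT/DELETE dependency chains described above.
public class AcrScenarioService {

    public record ReleaseRecord(String eventId, String actionType, String dependentEventId) {}

    /** New-closed: a single SUBMIT with no dependency. */
    public static List<ReleaseRecord> newClosed() {
        return List.of(new ReleaseRecord(UUID.randomUUID().toString(), "SUBMIT", null));
    }

    /** Re-closed: a DELETE of the prior submit, then a SUBMIT depending on that DELETE. */
    public static List<ReleaseRecord> reClosed(String priorSubmitId) {
        ReleaseRecord delete =
                new ReleaseRecord(UUID.randomUUID().toString(), "DELETE", priorSubmitId);
        ReleaseRecord submit =
                new ReleaseRecord(UUID.randomUUID().toString(), "SUBMIT", delete.eventId());
        return List.of(delete, submit);
    }

    /** Re-open: a DELETE that depends on the SUBMIT it retracts. */
    public static List<ReleaseRecord> reOpen(String priorSubmitId) {
        return List.of(new ReleaseRecord(UUID.randomUUID().toString(), "DELETE", priorSubmitId));
    }
}
```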
produces to
6
KAFKA TOPIC · coreorengine.v1

⚡ Dedicated Kafka Cluster — Sender Topic

ExternalEventNotification (Avro) — Cloud Event JSON body, callback URL, subscription metadata, and full authentication credentials (OAuth2 + HMAC secret).

consumes from
7
SERVICE 3 · consumes → authenticates → signs → delivers

📡 external-nxtport-notification-sender

Reactive WebFlux service. Consumes notification, resolves auth, and delivers to NxtPort.

🔑 Resolves OAuth2 Token — per-subscriber token from customer auth server (cached 22min per subscription)
🔏 HMAC Signature — generates message signature for integrity verification
🌐 HTTP POST to NxtPort — delivers Cloud Event payload with Bearer token + HMAC
🗄️ Updates PostgreSQL — sets status to SUBMITTED (success) or VALIDATION_ERROR (400/401)
📋 Produces Audit Record — to audits Kafka topic for every delivery attempt
🔁 Retry on failure — exponential backoff + 15% jitter, up to 9 attempts (30min intervals)
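The HMAC signing step above can be sketched like this. The production service's exact algorithm, canonical string, and header name are not documented here, so HMAC-SHA256 over the raw body with Base64 output is an assumption for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

// Sketch of HMAC request signing; algorithm and encoding are assumptions.
public class HmacSigner {

    /** Base64 HMAC-SHA256 over the raw request body. */
    public static String sign(String body, String sharedSecret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(
                    sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] digest = mac.doFinal(body.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("HMAC-SHA256 unavailable", e);
        }
    }
}
```

NxtPort can recompute the same digest with the shared secret and reject any payload that was altered in transit.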
HTTP POST submit/delete
8
EXTERNAL · receives & processes

🏗 NxtPort CPu Platform

Port of Antwerp-Bruges. Receives container release "submit" or "delete" notification. Processes the release, validates against port rules. Acknowledges receipt with publicReferenceId and externalReferenceId. Later sends async webhook callback with final status.

↩ sends webhook callback to
9
SERVICE 4 · receives webhook → classifies → routes

🔗 external-nxtport-integration-api

NxtPort calls our POST /notifications webhook with the outcome. The service classifies and routes accordingly:

POSITIVE — marks record COMPLETED in PostgreSQL. Release successful!
NEGATIVE — marks CALLBACK_VALIDATION_ERROR, sends alert email via SendGrid, creates GCSS error task
🛃 CUSTOMS_RELEASE — creates GCSS task: "COREOR - NOTIFY - FAVV - {equipment}"
🔍 CUSTOMS_SCAN — creates GCSS task: "COREOR - NOTIFY - Scanning - {equipment}"
NONE — acknowledged, no further action
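The five-way routing above amounts to a small dispatch table. The action strings below summarize the behaviour described; the enum and method names are illustrative:

```java
// Hypothetical routing table for the five webhook scope types listed above.
public class ScopeRouter {

    public enum Scope { POSITIVE, NEGATIVE, CUSTOMS_RELEASE, CUSTOMS_SCAN, NONE }

    /** Summarizes the side effects taken for each classified callback. */
    public static String route(Scope scope) {
        return switch (scope) {
            case POSITIVE -> "mark COMPLETED";
            case NEGATIVE -> "mark CALLBACK_VALIDATION_ERROR; send alert email; create GCSS task";
            case CUSTOMS_RELEASE -> "create GCSS task: COREOR - NOTIFY - FAVV";
            case CUSTOMS_SCAN -> "create GCSS task: COREOR - NOTIFY - Scanning";
            case NONE -> "acknowledge only";
        };
    }
}
```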
⚠ on delivery failure
10
SERVICES 5 & 6 · persist → schedule → re-send

🔁 Retry Consumer & Status Scheduler

Retry Consumer picks up PushNotificationRetryRecord from Kafka, persists to notification_retries table. Status Scheduler runs cron jobs, queries DB for due retries, re-publishes to sender topic. Cycles back to Step 7 — up to 9 attempts with exponential backoff. Permanent failures trigger SendGrid emails + GCSS tasks.

↻ loops back to Notification Sender (Step 7)

Six services, one mission

Click any service to explore its classes, responsibilities, and how it connects to the broader system.

external-coreor-engine

Java 25 · Spring Boot 4.0.3 · Confluent Avro 8.1

The entry point. Consumes GCSS EBDD events, filters for ACR_Open/ACR_Closed at Belgian terminals, extracts 30+ variables from deeply nested TPDoc Avro structures, and maps to clean EquipmentReleaseEvent payloads.

Key classes:
  • NotificationEventListener — Kafka consumer, event type filter, NxtPort pre-check
  • PayloadVariableExtractor — Complex TPDoc parsing, Belgian terminal validation, multi-country logic
  • CoreorMapper — TransportDocument → EquipmentReleaseEvent with RKST→APCS terminal resolution
  • EventDelegator — Kafka producer to dedicated cluster
  • TerminalCodeResolver — Immutable Belgian terminal code mapping (9 terminals)
  • LogMasker — NxtPort credential masking in logs
🗄

nxtport-database-engine

Java 25 · Spring Boot 4.0.3 · PostgreSQL · Spring Data JDBC

The brain. Applies complex business rules for new-closed, re-closed, new-open, and re-open scenarios. Persists SUBMIT/DELETE records with dependency tracking. Enriches events with vessel reference data.

Key classes:
  • EventListener — Kafka consumer, routes ACR_Closed vs ACR_Open
  • AcrClosedService — New-closed vs re-closed with dependency chains
  • AcrOpenService — Re-open DELETE processing with SUBMIT dependency
  • ReleaseEventService — Core DB operations, status management
  • VesselsService — Vessel IMO enrichment with Caffeine cache
  • EmailService — SendGrid failure notifications (async)
  • GcssApiService — GCSS task creation for error scenarios
📡

external-nxtport-notification-sender

Java 21 · Spring Boot 3.5.8 · WebFlux · R2DBC

The delivery engine. Reactive service that resolves per-subscriber OAuth2 tokens, generates HMAC signatures, POSTs to NxtPort callback URLs, and manages sophisticated retry logic with exponential backoff + jitter.

Key classes:
  • NotificationSender — Central delivery with OAuth2 + HMAC signing
  • RetryService — Exponential backoff (up to 9 attempts, 30min intervals)
  • RemoteOAuth2AccessTokenService — Per-subscription token caching
  • AuditService — Kafka audit trail for all delivery attempts
  • DataAccessService — R2DBC reactive DB status updates
  • GcssApiService — GCSS task creation on permanent failure
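The RetryService backoff policy above can be sketched numerically. The source states only "exponential backoff + 15% jitter, up to 9 attempts, 30min intervals", so the 30-second base delay and the 30-minute cap below are assumptions:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the retry delay policy: exponential growth capped at 30 minutes,
// with ±15% jitter to spread retries. Base delay and cap are assumptions.
public class RetryBackoff {

    static final int MAX_ATTEMPTS = 9;
    static final Duration BASE = Duration.ofSeconds(30);  // assumed first delay
    static final Duration CAP = Duration.ofMinutes(30);   // assumed ceiling

    /** Delay before the given attempt (1-based), before jitter: doubles each time. */
    public static Duration rawDelay(int attempt) {
        long seconds = BASE.getSeconds() << Math.min(attempt - 1, 20);
        return seconds >= CAP.getSeconds() ? CAP : Duration.ofSeconds(seconds);
    }

    /** Applies ±15% uniform jitter so subscribers don't retry in lockstep. */
    public static Duration jittered(Duration d) {
        double factor = 1.0 + ThreadLocalRandom.current().nextDouble(-0.15, 0.15);
        return Duration.ofMillis((long) (d.toMillis() * factor));
    }
}
```

The jitter matters at fleet scale: without it, a transient NxtPort outage would cause every failed delivery to retry at the same instant.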
🔗

external-nxtport-integration-api

Spring Boot · REST API · PostgreSQL · Spring Data JDBC

The callback receiver. Exposes POST /notifications webhook for NxtPort to report status. Classifies into 5 scope types: positive, negative, customs-release, customs-scan, or none.

Key classes:
  • NxtPortIntegrationApiController — POST /notifications endpoint
  • NxtportRequestService — Callback vs customs event routing
  • NxtPortRequestTypeValidator — 5-scope classification logic
  • DataAccessService — DB lookups and status updates
  • GcssApiService — GCSS task creation for customs events
  • SendGridEmailService — Rejection alert emails
🔁

external-nxtport-retry-consumer

Java 21 · Spring Boot 3.5.8 · JPA · PostgreSQL

The retry bridge. Consumes PushNotificationRetryRecord events from Kafka and persists to the notification_retries table. Provides durable storage for the scheduler to pick up and re-process failed deliveries.

Key classes:
  • RetryListener — Kafka consumer for retry records
  • RetryConsumerService — Maps Avro → JPA entity, persists to DB
  • AbstractKafkaListener — Base class with custom offset management
  • KafkaRetryConfig — Infinite Kafka retries, limited app retries

external-nxtport-status-scheduler

Spring Boot · Scheduled Jobs · PostgreSQL

The scheduler. Runs periodic jobs to pick up due retry records from the notification_retries table and resubmits them through the notification pipeline. Ensures no failed delivery is permanently lost.

Key classes:
  • Scheduled cron jobs for retry processing
  • Queries notification_retries for due retries
  • Re-publishes to sender Kafka topic
  • Manages retry count limits and dead-letter
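The scheduler's pick-up criterion can be reduced to a small predicate. Comparing against a string-typed `next_try` via `Instant.parse` is an assumption based on the `notification_retries` schema shown later; the retry ceiling comes from the 9-attempt limit:

```java
import java.time.Instant;

// Illustrative due-retry check matching the scheduler behaviour above.
public class RetryDueCheck {

    static final int MAX_RETRIES = 9;

    /** A record is picked up when its next_try has passed and attempts remain. */
    public static boolean isDue(String nextTry, int retryCount, Instant now) {
        return retryCount < MAX_RETRIES && !Instant.parse(nextTry).isAfter(now);
    }
}
```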

Step through the processing stages

Click any stage to see exactly what happens at each step in the event processing journey.

1
EBDD Event
GCSS publishes
2
Filter
ACR type check
3
Pre-check
Belgian terminal
4
Extract
Parse TPDoc
5
Persist
PostgreSQL
6
Deliver
NxtPort API
7
Callback
Webhook response

GCSS EBDD Event Emission

GCSS Activity Plan detects an ACR (Arrange Cargo Release) task state change — open, closed, re-open, or re-close. An EBDD event is emitted containing the full Transport Document (TPDoc) as a deeply nested Avro record. The event is published to the shared Kafka cluster topic with a KeySchema containing the segmentID. This is the origin of every container release notification in the system.


ACR task lifecycle & COREOR operations

Understanding when ACR tasks open, close, re-open, and re-close — and what COREOR submit/delete operations are triggered.

Maersk → NxtPort Outbound Workflow
Open — ACR Open: no NxtPort action yet; a subsequent ACR Closed is expected
Close — ACR Closed: COREOR Engine processes it; DB: SUBMIT record; NxtPort: "submit"
Re-open — ACR Re-Open: change type unknown; DB: DELETE record; NxtPort: "delete"
Re-close — ACR Re-Closed: checks last state; DB: dummy DELETE + new SUBMIT; NxtPort: "delete" then "submit"

Re-open = Always Delete

When ACR re-opens, the API cannot know if the change is critical (vessel, port) or non-critical (depot). So a COREOR "delete" is always triggered as a safety measure.

Open → Closed Pairing

ACR open always expects a subsequent closed. Re-open always expects a subsequent re-closed. This lifecycle drives the dependency chain in the database.

Non-critical = No Action

In GCSS, non-critical amendments (e.g. empty container return depot) don't require ACR re-open. These changes aren't relevant to NxtPort and skip the system entirely.


Kafka topics & data flow

Five Kafka topics across shared and dedicated clusters, all secured with SASL_SSL and SCRAM-SHA-512 authentication.

IN
MSK.booking.ebddTransportDocument.topic.internal.any.v2
GCSS → COREOR Engine
msk.shipment.coreordatabaseengine.topic.internal.any.v1
COREOR → DB Engine
msk.shipment.coreorengine.topic.internal.any.v1
DB Engine → Sender
msk.external-pushapi.audits.topic.internal.any.v1
Sender → Audit Trail
msk.shipment.coreorsenderretry.topic.internal.any.v1
Sender → Retry Consumer

PostgreSQL persistence layer

Two core tables track the complete lifecycle of every container release event.

nxtport_coreor

  • external_reference_id (VARCHAR, PK) — UUID-based unique event identifier
  • public_reference_id (UUID) — NxtPort response reference
  • action_type (VARCHAR) — SUBMIT or DELETE
  • dependent_event_id (VARCHAR) — reference to dependency chain
  • release_identification (VARCHAR) — tpDoc + equipment number
  • event (JSONB) — CoreorNotificationData payload
  • event_status (VARCHAR) — lifecycle status tracking
  • carrier_code (VARCHAR) — MAEU or MAEI

notification_retries

  • retry_id (BIGINT, PK) — auto-generated identity
  • payload (VARCHAR) — original notification payload
  • callback_url (VARCHAR) — NxtPort subscriber endpoint
  • retry_count (INTEGER) — current attempt number (max 9)
  • next_try (VARCHAR) — scheduled retry timestamp
  • authentication (JSONB) — OAuth2 + HMAC credentials

Event Status Lifecycle

NEW → SENDING → COMPLETED
NEW → SENDING → FAILED → RETRY → COMPLETED
NEW → WAITING → SENDING (dependency resolves)
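The lifecycle paths above can be captured as an allowed-transition table. The status names come from the diagram; the table itself is an illustrative sketch, and the assumption that a RETRY may fail again is mine:

```java
import java.util.Map;
import java.util.Set;

// Sketch of the event status lifecycle as an allowed-transition table.
public class EventStatusLifecycle {

    private static final Map<String, Set<String>> NEXT = Map.of(
            "NEW", Set.of("SENDING", "WAITING"),
            "WAITING", Set.of("SENDING"),            // dependency resolves
            "SENDING", Set.of("COMPLETED", "FAILED"),
            "FAILED", Set.of("RETRY"),
            "RETRY", Set.of("COMPLETED", "FAILED"),  // assumed: a retry may fail again
            "COMPLETED", Set.of());                  // terminal state

    public static boolean canTransition(String from, String to) {
        return NEXT.getOrDefault(from, Set.of()).contains(to);
    }
}
```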

Why this matters

This isn't just a technical migration — it's mandatory digital infrastructure that impacts every container moving through Belgium's largest port.

System Complexity at a Glance

Technical Complexity

  • 6 distributed microservices with independent lifecycles
  • Dual Kafka clusters (shared + dedicated) with Avro schemas
  • Complex dependency chain tracking across SUBMIT/DELETE records
  • Multi-country port matching with fallback scanning logic
  • Per-subscriber OAuth2 + HMAC authentication
  • Exponential backoff retry with jitter (up to 9 attempts)
  • GCSS task creation for error scenarios requiring manual intervention
  • Full audit trail on every notification attempt

Business Impact

  • Mandatory compliance under Port Police Regulations
  • Covers 9 terminals across Antwerp and Zeebrugge
  • Real-time container release vs legacy batch EDI
  • Supports critical + non-critical amendment handling
  • Customs integration (FAVV / scanning notifications)
  • Multi-carrier support (MAEU, MAEI)
  • Automated error handling replaces manual reconciliation
  • Zero-downtime evolution with Avro schema compatibility
🏗

Built by

The original EDI/COREOR solution was built by Oguz Aktan and Francisco Javier Moral on the GCSS backend. Koushik Mondal developed the API-based CPu integration that replaces COREOR for NxtPort.

🔒

Security First

SASL_SSL everywhere. OAuth2 per subscriber. HMAC message signing. Vault-based secrets. Log masking for credentials. Non-root container execution.

👁

Full Observability

OpenTelemetry distributed tracing across all 6 services. Prometheus metrics. Structured logging with Logstash. Grafana dashboards. Complete audit Kafka topic.

Never Lose an Event

Multi-layer retry: immediate (3x), scheduled (9x with backoff), DB persistence, status scheduler. SendGrid alerts + GCSS tasks on permanent failure.