Compare commits
25 Commits
v0.1.2...add_mix_co

| Author | SHA1 | Date |
|---|---|---|
| | 3f45052193 | |
| | 7dc7ab67e4 | |
| | e7c5e5f77f | |
| | 4e32a958ea | |
| | a260def38d | |
| | 782a935d3d | |
| | 3fbdabc874 | |
| | 7386f8ed0b | |
| | 51e494c48b | |
| | 9ea9d55eee | |
| | 8c106464fd | |
| | 7433c147c9 | |
| | 9c4a9ea1e5 | |
| | 82804c6803 | |
| | 483caab54c | |
| | a9821b1ae6 | |
| | 0744642985 | |
| | 1d5c6f3348 | |
| | ad87934abf | |
| | 6b49fa68c0 | |
| | f0df169689 | |
| | d9fd7a61bb | |
| | 897f717da5 | |
| | 51e1a065ad | |
| | e7f50e899d | |
@@ -12,3 +12,4 @@ Role: Principal Systems Architect & Lead Software Engineer. Objective: Implement
+Create a walkthrough for Julia service-A sending a mixed-content chat message to Julia service-B. The chat message must include
@@ -1,8 +1,8 @@
 # This file is machine-generated - editing it directly is not advised

-julia_version = "1.12.4"
+julia_version = "1.12.5"
 manifest_format = "2.0"
-project_hash = "be1e3c2d8b7f4f0ee7375c94aaf704ce73ba57b9"
+project_hash = "8a7a8b88d777403234a6816e699fb0ab1e991aac"

 [[deps.AliasTables]]
 deps = ["PtrArrays", "Random"]
@@ -1,194 +0,0 @@
### API

The Plik server exposes a RESTful API to manage uploads and get files:

Get and create upload :

- **POST** /upload
  - Params (json object in request body) :
    - oneshot (bool)
    - stream (bool)
    - removable (bool)
    - ttl (int)
    - login (string)
    - password (string)
    - files (see below)
  - Return :
    JSON formatted upload object.
    Important fields :
    - id (required to upload files)
    - uploadToken (required to upload/remove files)
    - files (see below)

For stream mode you need to know the file id before the upload starts, as the call will block.
The file size and/or file type also need to be known before the upload starts, as they have to be printed
in the HTTP response headers.
To get the file ids, pass a "files" json object with each file you are about to upload.
Fill the reference field with an arbitrary string to avoid matching file ids using the fileName field.
This is also used to notify of MISSING files when a file upload is not yet finished or has failed.
```
"files" : [
  {
    "fileName": "file.txt",
    "fileSize": 12345,
    "fileType": "text/plain",
    "reference": "0"
  },...
]
```
- **GET** /upload/:uploadid:
  - Get upload metadata (files list, upload date, ttl, ...)

Upload file :

- **POST** /$mode/:uploadid:/:fileid:/:filename:
  - Request body must be a multipart request with a part named "file" containing the file data.

- **POST** /file/:uploadid:
  - Same as above without passing a file id; won't work for stream mode.

- **POST** /:
  - Quick mode: automatically creates an upload with default parameters and adds the file to it.

Get file :

- **HEAD** /$mode/:uploadid:/:fileid:/:filename:
  - Returns only the HTTP headers. Useful to know Content-Type and Content-Length without downloading the file, especially if the upload has the OneShot option enabled.

- **GET** /$mode/:uploadid:/:fileid:/:filename:
  - Download file. Filename **MUST** match. A browser might try to display the file if it's a jpeg, for example. You may try to force the download with ?dl=1 in the url.

- **GET** /archive/:uploadid:/:filename:
  - Download uploaded files in a zip archive. :filename: must end with .zip

Remove file :

- **DELETE** /$mode/:uploadid:/:fileid:/:filename:
  - Delete file. Upload **MUST** have the "removable" option enabled.

Show server details :

- **GET** /version
  - Show plik server version and some build information (build host, date, git revision, ...)

- **GET** /config
  - Show plik server configuration (ttl values, max file size, ...)

- **GET** /stats
  - Get server statistics (upload/file count, user count, total size used)
  - Admin only
User authentication :

Plik can authenticate users using Google and/or OVH third-party APIs.
The /auth API is designed for the Plik web application; nevertheless, if you want to automate it, be sure to provide a valid
Referrer HTTP header and forward all session cookies.
Plik session cookies have the "secure" flag set, so they can only be transmitted over secure HTTPS connections.
To avoid CSRF attacks, the value of the plik-xsrf cookie MUST be copied into the X-XSRFToken HTTP header of each
authenticated request.
Once authenticated, a user can generate upload tokens. Those tokens can be used in the X-PlikToken HTTP header to link
an upload to the user account. It can be put in the ~/.plikrc file of the Plik command line client.

- **Local** :
  - You'll need to create users using the server command line

- **Google** :
  - You'll need to create a new application in the [Google Developer Console](https://console.developers.google.com)
  - You'll be handed a Google API ClientID and a Google API ClientSecret that you'll need to put in the plikd.cfg file
  - Do not forget to whitelist a valid origin and redirect url ( https://yourdomain/auth/google/callback ) for your domain

- **OVH** :
  - You'll need to create a new application in the OVH API : https://eu.api.ovh.com/createApp/
  - You'll be handed an OVH application key and an OVH application secret key that you'll need to put in the plikd.cfg file

- **GET** /auth/google/login
  - Get the Google user consent URL. The user has to visit this URL to authenticate

- **GET** /auth/google/callback
  - Callback of the user consent dialog
  - The user will be redirected back to the web application with a Plik session cookie at the end of this call

- **GET** /auth/ovh/login
  - Get the OVH user consent URL. The user has to visit this URL to authenticate
  - The response will contain a temporary session cookie to forward the API endpoint and OVH consumer key to the callback

- **GET** /auth/ovh/callback
  - Callback of the user consent dialog
  - The user will be redirected back to the web application with a Plik session cookie at the end of this call

- **POST** /auth/local/login
  - Params :
    - login : user login
    - password : user password

- **GET** /auth/logout
  - Invalidate Plik session cookies

- **GET** /me
  - Return basic user info (ID, name, email) and tokens

- **DELETE** /me
  - Remove the user account

- **GET** /me/token
  - List user tokens
  - This call uses pagination

- **POST** /me/token
  - Create a new upload token
  - A comment can be passed in the json body

- **DELETE** /me/token/{token}
  - Revoke an upload token

- **GET** /me/uploads
  - List user uploads
  - Params :
    - token : filter by token
  - This call uses pagination

- **DELETE** /me/uploads
  - Remove all uploads linked to a user account
  - Params :
    - token : filter by token

- **GET** /me/stats
  - Get user statistics (upload/file count, total size used)

- **GET** /users
  - List all users
  - This call uses pagination
  - Admin only

QRCode :

- **GET** /qrcode
  - Generate a QRCode image from an url
  - Params :
    - url : The url you want to store in the QRCode
    - size : The size of the generated image in pixels (default: 250, max: 1000)

$mode can be "file" or "stream" depending on whether stream mode is enabled. See the FAQ for more details.
Examples :
```sh
# Create an upload (the json response contains the upload id and upload token)
$ curl -X POST http://127.0.0.1:8080/upload

# Create a OneShot upload
$ curl -X POST -d '{ "OneShot" : true }' http://127.0.0.1:8080/upload

# Upload a file to an upload
$ curl -X POST --header "X-UploadToken: M9PJftiApG1Kqr81gN3Fq1HJItPENMhl" -F "file=@test.txt" http://127.0.0.1:8080/file/IsrIPIsDskFpN12E

# Get headers
$ curl -I http://127.0.0.1:8080/file/IsrIPIsDskFpN12E/sFjIeokH23M35tN4/test.txt
HTTP/1.1 200 OK
Content-Disposition: filename=test.txt
Content-Length: 3486
Content-Type: text/plain; charset=utf-8
Date: Fri, 15 May 2015 09:16:20 GMT
```
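The curl calls above can also be assembled programmatically. A minimal JavaScript sketch that only builds the request URL and body for the create-upload and file-upload calls, without sending anything (the helper names are illustrative; the field casing follows the curl examples):

```javascript
// Build the pieces of a Plik "create upload" request without sending it.
function buildCreateUploadRequest(baseUrl, { oneshot = false, ttl = 0 } = {}) {
  const body = {};
  if (oneshot) body.OneShot = true; // field name as used in the curl example
  if (ttl > 0) body.ttl = ttl;
  return {
    method: 'POST',
    url: `${baseUrl}/upload`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
}

// Build the file-upload request for an existing upload.
function buildFileUploadRequest(baseUrl, uploadId, uploadToken) {
  return {
    method: 'POST',
    url: `${baseUrl}/file/${uploadId}`,
    headers: { 'X-UploadToken': uploadToken },
  };
}

const created = buildCreateUploadRequest('http://127.0.0.1:8080', { oneshot: true });
console.log(created.url, created.body);
```

The returned objects can then be handed to any HTTP client (fetch, axios, ...).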
@@ -1,8 +1,11 @@
[deps]
Arrow = "69666777-d1a9-59fb-9406-91d4454c9d45"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
GeneralUtils = "c6c72f09-b708-4ac8-ac7c-2084d70108fe"
HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
NATS = "55e73f9c-eeeb-467f-b4cc-a633fde63d2a"
PrettyPrinting = "54e16d92-306c-5ea0-a30b-337be88ac337"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
UUIDs = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
@@ -1,321 +0,0 @@
# Implementation Guide: Bi-Directional Data Bridge

## Overview

This document describes the implementation of the high-performance, bi-directional data bridge between Julia and JavaScript services using NATS (Core & JetStream), implementing the Claim-Check pattern for large payloads.

## Architecture

The implementation follows the Claim-Check pattern:

```
┌─────────────────────────────────────────────────────────────────────────┐
│                           SmartSend Function                            │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                         Is payload size < 1MB?                          │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                  ┌─────────────────┴─────────────────┐
                  ▼                                   ▼
        ┌─────────────────┐                 ┌─────────────────┐
        │   Direct Path   │                 │    Link Path    │
        │     (< 1MB)     │                 │     (> 1MB)     │
        │                 │                 │                 │
        │ • Serialize to  │                 │ • Serialize to  │
        │   IOBuffer      │                 │   IOBuffer      │
        │ • Base64 encode │                 │ • Upload to     │
        │ • Publish to    │                 │   HTTP Server   │
        │   NATS          │                 │ • Publish to    │
        │                 │                 │   NATS with URL │
        └─────────────────┘                 └─────────────────┘
```
## Files

### Julia Module: [`src/julia_bridge.jl`](../src/julia_bridge.jl)

The Julia implementation provides:

- **[`MessageEnvelope`](../src/julia_bridge.jl)**: Struct for the unified JSON envelope
- **[`SmartSend()`](../src/julia_bridge.jl)**: Handles transport selection based on payload size
- **[`SmartReceive()`](../src/julia_bridge.jl)**: Handles both direct and link transport

### JavaScript Module: [`src/js_bridge.js`](../src/js_bridge.js)

The JavaScript implementation provides:

- **`MessageEnvelope` class**: For the unified JSON envelope
- **[`SmartSend()`](../src/js_bridge.js)**: Handles transport selection based on payload size
- **[`SmartReceive()`](../src/js_bridge.js)**: Handles both direct and link transport
## Installation

### Julia Dependencies

```julia
using Pkg
Pkg.add("NATS")
Pkg.add("Arrow")
Pkg.add("JSON3")
Pkg.add("HTTP")
Pkg.add("UUIDs")
Pkg.add("Dates")
```

### JavaScript Dependencies

```bash
npm install nats apache-arrow uuid base64-url
```
## Usage Tutorial

### Step 1: Start NATS Server

```bash
docker run -p 4222:4222 nats:latest
```

### Step 2: Start HTTP File Server (optional)

```bash
# Create a directory for file uploads
mkdir -p /tmp/fileserver

# Use any HTTP server that supports POST for file uploads.
# Note: Python's built-in server only serves files over GET; it is shown
# here as a download-only placeholder.
python3 -m http.server 8080 --directory /tmp/fileserver
```

### Step 3: Run Test Scenarios

```bash
# Scenario 1: Command & Control (JavaScript sender)
node test/scenario1_command_control.js

# Scenario 2: Large Arrow Table (JavaScript sender)
node test/scenario2_large_table.js

# Scenario 3: Julia-to-Julia communication
# Run both Julia and JavaScript versions
julia test/scenario3_julia_to_julia.jl
node test/scenario3_julia_to_julia.js
```
## Usage

### Scenario 1: Command & Control (Small JSON)

#### JavaScript (Sender)
```javascript
const { SmartSend } = require('./js_bridge');

const config = {
  step_size: 0.01,
  iterations: 1000
};

await SmartSend("control", config, "json", {
  correlationId: "unique-id"
});
```

#### Julia (Receiver)
```julia
using NATS
using JSON3
using Base64

# Subscribe to control subject
subscribe(nats, "control") do msg
    env = MessageEnvelope(String(msg.data))
    # Direct-transport payloads arrive Base64-encoded in the envelope
    config = JSON3.read(String(base64decode(env.payload)))

    # Execute simulation with parameters
    step_size = config.step_size
    iterations = config.iterations

    # Send acknowledgment
    response = Dict("status" => "Running", "correlation_id" => env.correlation_id)
    publish(nats, "control_response", JSON3.write(response))
end
```
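On the direct path the payload travels Base64-encoded inside the envelope. A minimal, standalone JavaScript sketch of that encode/decode round trip (the `SmartSend`/`SmartReceive` internals are assumed to do the equivalent):

```javascript
// Encode a JSON-serializable object the way the direct path does,
// and decode it back on the receiving side.
function encodeDirectPayload(obj) {
  const json = JSON.stringify(obj);
  return Buffer.from(json, 'utf8').toString('base64');
}

function decodeDirectPayload(b64) {
  const json = Buffer.from(b64, 'base64').toString('utf8');
  return JSON.parse(json);
}

const config = { step_size: 0.01, iterations: 1000 };
const encoded = encodeDirectPayload(config);
const decoded = decodeDirectPayload(encoded);
console.log(decoded.iterations); // 1000
```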
### Scenario 2: Deep Dive Analysis (Large Arrow Table)

#### Julia (Sender)
```julia
using Arrow
using DataFrames

# Create large DataFrame
df = DataFrame(
    id = 1:10_000_000,
    value = rand(10_000_000),
    category = rand(["A", "B", "C"], 10_000_000)
)

# Send via SmartSend with type="table"
SmartSend("analysis_results", df, "table")
```

#### JavaScript (Receiver)
```javascript
const { SmartReceive } = require('./js_bridge');

const result = await SmartReceive(msg);

// Use table data for visualization with Perspective.js or D3
const table = result.data;
```
### Scenario 3: Live Binary Processing

#### JavaScript (Sender)
```javascript
const { SmartSend } = require('./js_bridge');

// Capture an audio chunk (illustrative; getUserMedia returns a MediaStream
// that still has to be read into raw bytes by an app-specific helper)
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const binaryData = await captureChunk(stream); // hypothetical helper

await SmartSend("binary_input", binaryData, "binary", {
  metadata: {
    sample_rate: 44100,
    channels: 1
  }
});
```

#### Julia (Receiver)
```julia
using WAV
using DSP
using FFTW

# Receive binary data
function process_binary(data)
    # Perform FFT or AI transcription
    spectrum = fft(data)

    # Send results back (JSON + Arrow table)
    results = Dict("transcription" => "sample text", "spectrum" => spectrum)
    SmartSend("binary_output", results, "json")
end
```
### Scenario 4: Catch-Up (JetStream)

#### Julia (Producer)
```julia
using NATS
using JSON3

function publish_health_status(nats)
    # JetStream stream handle (helper assumed to be provided by the bridge module)
    jetstream = JetStream(nats, "health_updates")

    while true
        status = Dict("cpu" => rand(), "memory" => rand())
        publish(jetstream, "health", JSON3.write(status))
        sleep(5)  # Every 5 seconds
    end
end
```

#### JavaScript (Consumer)
```javascript
const { connect } = require('nats');

const nc = await connect({ servers: ['nats://localhost:4222'] });
const js = nc.jetstream();

// Request replay from the last 10 minutes
const consumer = await js.pullSubscribe("health", {
  durable_name: "catchup",
  max_batch: 100,
  max_ack_wait: 30000
});

// Process historical and real-time messages
for await (const msg of consumer) {
  const result = await SmartReceive(msg);
  // Process the data
  msg.ack();
}
```
## Configuration

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `NATS_URL` | `nats://localhost:4222` | NATS server URL |
| `FILESERVER_URL` | `http://localhost:8080/upload` | HTTP file server URL |
| `SIZE_THRESHOLD` | `1_000_000` | Size threshold in bytes (1 MB) |
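A sketch of how these variables might be read with their defaults on the JavaScript side (variable names as in the table; the `loadConfig` helper is illustrative):

```javascript
// Read bridge configuration from the environment, falling back to the
// documented defaults.
function loadConfig(env = process.env) {
  return {
    natsUrl: env.NATS_URL || 'nats://localhost:4222',
    fileserverUrl: env.FILESERVER_URL || 'http://localhost:8080/upload',
    // The documented default uses Julia-style digit grouping (1_000_000),
    // so underscores are stripped before parsing.
    sizeThreshold: env.SIZE_THRESHOLD
      ? parseInt(env.SIZE_THRESHOLD.replace(/_/g, ''), 10)
      : 1_000_000,
  };
}

console.log(loadConfig({}).sizeThreshold); // 1000000
```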
### Message Envelope Schema

```json
{
  "correlation_id": "uuid-v4-string",
  "type": "json|table|binary",
  "transport": "direct|link",
  "payload": "base64-encoded-string",       // Only if transport=direct
  "url": "http://fileserver/path/to/data",  // Only if transport=link
  "metadata": {
    "content_type": "application/octet-stream",
    "content_length": 123456,
    "format": "arrow_ipc_stream"
  }
}
```
## Performance Considerations

### Zero-Copy Reading
- Use Arrow's memory-mapped file reading
- Avoid unnecessary data copying during deserialization
- Use Apache Arrow's native IPC reader

### Exponential Backoff
- Maximum retry count: 5
- Base delay: 100 ms, max delay: 5000 ms
- Implemented in both the Julia and JavaScript modules
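The retry parameters above translate into a simple delay schedule. A sketch of the delay computation (capped exponential backoff; jitter is omitted for clarity):

```javascript
// Compute the capped exponential backoff delay (ms) for a given attempt.
function backoffDelay(attempt, baseDelay = 100, maxDelay = 5000) {
  return Math.min(baseDelay * 2 ** attempt, maxDelay);
}

// Delays for attempts 0..4: 100, 200, 400, 800, 1600 ms
const delays = [0, 1, 2, 3, 4].map(a => backoffDelay(a));
console.log(delays.join(', '));
```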
### Correlation ID Logging
- Log the correlation_id at every stage
- Include: send, receive, serialize, deserialize
- Use a structured logging format

## Testing

Run the test scripts:

```bash
# Scenario 1: Command & Control (JavaScript sender)
node test/scenario1_command_control.js

# Scenario 2: Large Arrow Table (JavaScript sender)
node test/scenario2_large_table.js
```

## Troubleshooting

### Common Issues

1. **NATS Connection Failed**
   - Ensure the NATS server is running
   - Check the NATS_URL configuration

2. **HTTP Upload Failed**
   - Ensure the file server is running
   - Check the FILESERVER_URL configuration
   - Verify upload permissions

3. **Arrow IPC Deserialization Error**
   - Ensure data is properly serialized to Arrow format
   - Check Arrow version compatibility

## License

MIT
@@ -4,6 +4,88 @@

This document describes the architecture for a high-performance, bi-directional data bridge between a Julia service and a JavaScript (Node.js) service using NATS (Core & JetStream), implementing the Claim-Check pattern for large payloads.

### File Server Handler Architecture

The system uses **handler functions** to abstract file server operations, allowing support for different file server implementations (e.g., Plik, AWS S3, a custom HTTP server).

**Handler Function Signatures:**

```julia
# Upload handler - uploads data to the file server and returns a URL
# The handler is passed to smartsend as the fileserverUploadHandler parameter
# It receives: (fileserver_url::String, dataname::String, data::Vector{UInt8})
# Returns: Dict{String, Any} with keys: "status", "uploadid", "fileid", "url"
fileserverUploadHandler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}

# Download handler - fetches data from a file server URL with exponential backoff
# The handler is passed to smartreceive as the fileserverDownloadHandler parameter
# It receives: (url::String, max_retries::Int, base_delay::Int, max_delay::Int, correlation_id::String)
# Returns: Vector{UInt8} (the downloaded data)
fileserverDownloadHandler(url::String, max_retries::Int, base_delay::Int, max_delay::Int, correlation_id::String)::Vector{UInt8}
```
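A JavaScript-side equivalent of the download handler can be sketched as follows. The `fetchFn` parameter is injected so the retry logic can be exercised without a real server; the function name and parameter order are illustrative, not the bridge's actual API:

```javascript
// Download with capped exponential backoff. fetchFn(url) should resolve to
// a byte buffer or throw on failure.
async function downloadWithBackoff(url, fetchFn, maxRetries = 5, baseDelay = 100, maxDelay = 5000) {
  let lastError;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fetchFn(url);
    } catch (err) {
      lastError = err;
      // Wait 100, 200, 400, ... ms, capped at maxDelay
      const delay = Math.min(baseDelay * 2 ** attempt, maxDelay);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

On the Julia side the same schedule would wrap the HTTP client call inside `fileserverDownloadHandler`.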
This design allows the system to support multiple file server backends without changing the core messaging logic.

### Multi-Payload Support (Standard API)

The system uses a **standardized list-of-tuples format** for all payload operations. **Even when sending a single payload, the user must wrap it in a list.**

**API Standard:**
```julia
# Input format for smartsend (always a list of tuples with type info)
[(dataname1, data1, type1), (dataname2, data2, type2), ...]

# Output format for smartreceive (always returns a list of tuples)
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
```

**Supported Types:**
- `"text"` - Plain text
- `"dictionary"` - JSON-serializable dictionaries (Dict, NamedTuple)
- `"table"` - Tabular data (DataFrame, array of structs)
- `"image"` - Image data (bitmap, PNG/JPG bytes)
- `"audio"` - Audio data (WAV, MP3 bytes)
- `"video"` - Video data (MP4, AVI bytes)
- `"binary"` - Generic binary data (Vector{UInt8})

This design allows per-payload type specification, enabling **mixed-content messages** where different payloads can use different serialization formats in a single message.

**Examples:**

```julia
# Single payload - still wrapped in a list
smartsend(
    "/test",
    [("dataname1", data1, "dictionary")],  # List with one (dataname, data, type) tuple
    nats_url="nats://localhost:4222",
    fileserverUploadHandler=plik_oneshot_upload,
    metadata=user_provided_envelope_level_metadata
)

# Multiple payloads in one message with different types
smartsend(
    "/test",
    [("dataname1", data1, "dictionary"), ("dataname2", data2, "table")],
    nats_url="nats://localhost:4222",
    fileserverUploadHandler=plik_oneshot_upload
)

# Mixed content (e.g., chat with text, image, audio)
smartsend(
    "/chat",
    [
        ("message_text", "Hello!", "text"),
        ("user_image", image_data, "image"),
        ("audio_clip", audio_data, "audio")
    ],
    nats_url="nats://localhost:4222"
)

# Receive always returns a list
payloads = smartreceive(msg, fileserverDownloadHandler, max_retries, base_delay, max_delay)
# payloads = [("dataname1", data1, type1), ("dataname2", data2, type2), ...]
```
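The same contract on the JavaScript side, where a tuple becomes a three-element array, can be validated with a small sketch (the type list mirrors the Supported Types above; `validatePayloads` is an illustrative name, not part of the bridge API):

```javascript
const SUPPORTED_TYPES = new Set([
  'text', 'dictionary', 'table', 'image', 'audio', 'video', 'binary',
]);

// Validate the [(dataname, data, type), ...] input contract.
function validatePayloads(payloads) {
  if (!Array.isArray(payloads) || payloads.length === 0) {
    throw new Error('payloads must be a non-empty list of [dataname, data, type] tuples');
  }
  for (const tuple of payloads) {
    if (!Array.isArray(tuple) || tuple.length !== 3) {
      throw new Error('each payload must be a [dataname, data, type] tuple');
    }
    const [dataname, , type] = tuple;
    if (typeof dataname !== 'string') throw new Error('dataname must be a string');
    if (!SUPPORTED_TYPES.has(type)) throw new Error(`unsupported payload type: ${type}`);
  }
  return true;
}

// Mixed-content chat message: text + image + audio in one message
validatePayloads([
  ['message_text', 'Hello!', 'text'],
  ['user_image', Buffer.alloc(16), 'image'],
  ['audio_clip', Buffer.alloc(16), 'audio'],
]);
```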
## Architecture Diagram

```mermaid
@@ -34,38 +116,124 @@ flowchart TD
 ## System Components

-### 1. Unified JSON Envelope Schema
+### 1. msgEnvelope_v1 - Message Envelope

-All messages use a standardized envelope format:
+The `msgEnvelope_v1` structure provides a comprehensive message format for bidirectional communication between Julia and JavaScript services.

+**Julia Structure:**
+```julia
+struct msgEnvelope_v1
+    correlationId::String     # Unique identifier to track messages across systems
+    msgId::String             # This message id
+    timestamp::String         # Message published timestamp
+
+    sendTo::String            # Topic/subject the sender sends to
+    msgPurpose::String        # Purpose of this message (ACK | NACK | updateStatus | shutdown | ...)
+    senderName::String        # Sender name (e.g., "agent-wine-web-frontend")
+    senderId::String          # Sender id (uuid4)
+    receiverName::String      # Message receiver name (e.g., "agent-backend")
+    receiverId::String        # Message receiver id (uuid4, or nothing for broadcast)
+    replyTo::String           # Topic to reply to
+    replyToMsgId::String      # Message id this message is replying to
+    brokerURL::String         # NATS server address
+
+    metadata::Dict{String, Any}
+    payloads::AbstractArray{msgPayload_v1}  # Multiple payloads stored here
+end
+```

+**JSON Schema:**
 ```json
 {
-  "correlation_id": "uuid-v4-string",
-  "type": "json|table|binary",
-  "transport": "direct|link",
-  "payload": "base64-encoded-string",       // Only if transport=direct
-  "url": "http://fileserver/path/to/data",  // Only if transport=link
+  "correlationId": "uuid-v4-string",
+  "msgId": "uuid-v4-string",
+  "timestamp": "2024-01-15T10:30:00Z",
+
+  "sendTo": "topic/subject",
+  "msgPurpose": "ACK | NACK | updateStatus | shutdown | chat",
+  "senderName": "agent-wine-web-frontend",
+  "senderId": "uuid4",
+  "receiverName": "agent-backend",
+  "receiverId": "uuid4",
+  "replyTo": "topic",
+  "replyToMsgId": "uuid4",
+  "brokerURL": "nats://localhost:4222",
+
   "metadata": {
     "content_type": "application/octet-stream",
     "content_length": 123456,
     "format": "arrow_ipc_stream"
-  }
+  },
+
+  "payloads": [
+    {
+      "id": "uuid4",
+      "dataname": "login_image",
+      "type": "image",
+      "transport": "direct",
+      "encoding": "base64",
+      "size": 15433,
+      "data": "base64-encoded-string",
+      "metadata": {}
+    },
+    {
+      "id": "uuid4",
+      "dataname": "large_data",
+      "type": "table",
+      "transport": "link",
+      "encoding": "none",
+      "size": 524288,
+      "data": "http://localhost:8080/file/UPLOAD_ID/FILE_ID/data.arrow",
+      "metadata": {}
+    }
+  ]
 }
 ```
-### 2. Transport Strategy Decision Logic
+### 2. msgPayload_v1 - Payload Structure

The `msgPayload_v1` structure provides flexible payload handling for various data types.

**Julia Structure:**
```julia
struct msgPayload_v1
    id::String                   # Id of this payload (e.g., "uuid4")
    dataname::String             # Name of this payload (e.g., "login_image")
    type::String                 # "text | dictionary | table | image | audio | video | binary"
    transport::String            # "direct | link"
    encoding::String             # "none | json | base64 | arrow-ipc"
    size::Integer                # Data size in bytes
    data::Any                    # Payload data for direct transport, or a URL for link transport
    metadata::Dict{String, Any}  # Dict("checksum" => "sha256_hash", ...)
end
```

**Key Features:**
- Supports multiple data types: text, dictionary, table, image, audio, video, binary
- Flexible transport: "direct" (NATS) or "link" (HTTP fileserver)
- Multiple payloads per message (essential for chat with mixed content)
- Per-payload and per-envelope metadata support
### 3. Transport Strategy Decision Logic

```
┌─────────────────────────────────────────────────────────────┐
-│                     SmartSend Function                      │
+│                     smartsend Function                      │
+│         Accepts: [(dataname1, data1, type1), ...]           │
+│     (No standalone type parameter - type per payload)       │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
-│                   Is payload size < 1MB?                    │
+│                      For each payload:                      │
+│                 1. Extract type from tuple                  │
+│                 2. Serialize based on type                  │
+│                 3. Check payload size                       │
└─────────────────────────────────────────────────────────────┘
                              │
            ┌─────────────────┴─────────────────┐
            ▼                                   ▼
  ┌─────────────────┐                 ┌─────────────────┐
  │   Direct Path   │                 │    Link Path    │
@@ -76,23 +244,24 @@ All messages use a standardized envelope format:
  │ • Base64 encode │                 │ • Upload to     │
  │ • Publish to    │                 │   HTTP Server   │
  │   NATS          │                 │ • Publish to    │
- │                 │                 │   NATS with URL │
+ │ (with payload   │                 │   NATS with URL │
+ │  in envelope)   │                 │   (in envelope) │
  └─────────────────┘                 └─────────────────┘
```
-### 3. Julia Module Architecture
+### 4. Julia Module Architecture

```mermaid
graph TD
    subgraph JuliaModule
-        SmartSendJulia[SmartSend Julia]
+        smartsendJulia[smartsend Julia]
        SizeCheck[Size Check]
        DirectPath[Direct Path]
        LinkPath[Link Path]
        HTTPClient[HTTP Client]
    end

-    SmartSendJulia --> SizeCheck
+    smartsendJulia --> SizeCheck
    SizeCheck -->|< 1MB| DirectPath
    SizeCheck -->|>= 1MB| LinkPath
    LinkPath --> HTTPClient
@@ -100,19 +269,19 @@ graph TD
    style JuliaModule fill:#c5e1a5
```

-### 4. JavaScript Module Architecture
+### 5. JavaScript Module Architecture

```mermaid
graph TD
    subgraph JSModule
-        SmartSendJS[SmartSend JS]
-        SmartReceiveJS[SmartReceive JS]
+        smartsendJS[smartsend JS]
+        smartreceiveJS[smartreceive JS]
        JetStreamConsumer[JetStream Pull Consumer]
        ApacheArrow[Apache Arrow]
    end

-    SmartSendJS --> NATS
-    SmartReceiveJS --> JetStreamConsumer
+    smartsendJS --> NATS
+    smartreceiveJS --> JetStreamConsumer
    JetStreamConsumer --> ApacheArrow

    style JSModule fill:#f3e5f5
@@ -129,37 +298,66 @@ graph TD
- `HTTP.jl` - HTTP client for the file server
- `Dates.jl` - Timestamps for logging

-#### SmartSend Function
+#### smartsend Function

```julia
-function SmartSend(
+function smartsend(
    subject::String,
-    data::Any,
-    type::String = "json";
+    data::AbstractArray{Tuple{String, Any, String}};  # No standalone type parameter
    nats_url::String = "nats://localhost:4222",
-    fileserver_url::String = "http://localhost:8080/upload",
+    fileserverUploadHandler::Function = plik_oneshot_upload,
    size_threshold::Int = 1_000_000  # 1MB
)
```

-**Flow:**
-1. Serialize data to Arrow IPC stream (if table)
-2. Check payload size
-3. If < threshold: publish directly to NATS with Base64-encoded payload
-4. If >= threshold: upload to HTTP server, publish NATS with URL
+**Input Format:**
+- `data::AbstractArray{Tuple{String, Any, String}}` - **Must be a list of (dataname, data, type) tuples**: `[("dataname1", data1, "type1"), ("dataname2", data2, "type2"), ...]`
+- Even for single payloads: `[(dataname1, data1, "type1")]`
+- Each payload can have a different type, enabling mixed-content messages

-#### SmartReceive Handler
+**Flow:**
+1. Iterate through the list of `(dataname, data, type)` tuples
+2. For each payload: extract the type from the tuple and serialize accordingly
+3. Check payload size
+4. If < threshold: publish directly to NATS with Base64-encoded payload
+5. If >= threshold: upload to HTTP server, publish to NATS with URL
#### smartreceive Handler
```julia
-function SmartReceive(msg::NATS.Message)
+function smartreceive(
+    msg::NATS.Message,
+    fileserverDownloadHandler::Function;
+    max_retries::Int = 5,
+    base_delay::Int = 100,
+    max_delay::Int = 5000
+)
    # Parse envelope
-    # Check transport type
+    # Iterate through all payloads
+    # For each payload: check transport type
    # If direct: decode Base64 payload
-    # If link: fetch from URL with exponential backoff
-    # Deserialize Arrow IPC to DataFrame
+    # If link: fetch from URL with exponential backoff using fileserverDownloadHandler
+    # Deserialize payload based on type
+    # Return list of (dataname, data, type) tuples
end
```

**Output Format:**
- Always returns a list of tuples: `[(dataname1, data1, type1), (dataname2, data2, type2), ...]`
- Even for single payloads: `[(dataname1, data1, type1)]`

**Process Flow:**
1. Parse the JSON envelope to extract the `payloads` array
2. Iterate through each payload in `payloads`
3. For each payload:
   - Determine transport type (`direct` or `link`)
   - If `direct`: decode Base64 data from the message
   - If `link`: fetch data from URL using exponential backoff (via `fileserverDownloadHandler`)
   - Deserialize based on payload type (`dictionary`, `table`, `binary`, etc.)
4. Return list of `(dataname, data, type)` tuples

**Note:** The `fileserverDownloadHandler` receives `(url::String, max_retries::Int, base_delay::Int, max_delay::Int, correlation_id::String)` and returns `Vector{UInt8}`.
|
||||
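A minimal sketch of a download handler with exponential backoff, shown in JavaScript for brevity. The fetch function is injected so the retry logic is testable without a file server; all names here are assumptions, not the bridge's actual API.

```javascript
// Retries fetchFn up to maxRetries times, doubling the delay each attempt
// (capped at maxDelay), mirroring the backoff parameters described above.
async function downloadWithBackoff(url, fetchFn, {
  maxRetries = 5, baseDelay = 100, maxDelay = 5000,
} = {}) {
  let attempt = 0;
  for (;;) {
    try {
      return await fetchFn(url); // expected to resolve to a Uint8Array
    } catch (err) {
      attempt += 1;
      if (attempt > maxRetries) throw err;
      const delay = Math.min(baseDelay * 2 ** (attempt - 1), maxDelay);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```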
|
||||
### JavaScript Implementation
|
||||
|
||||
#### Dependencies
|
||||
@@ -167,34 +365,55 @@ end
|
||||
- `apache-arrow` - Arrow IPC serialization
|
||||
- `uuid` - Correlation ID generation
|
||||
|
||||
#### SmartSend Function
|
||||
#### smartsend Function
|
||||
|
||||
```javascript
|
||||
async function SmartSend(subject, data, type = 'json', options = {})
|
||||
async function smartsend(subject, data, options = {})
|
||||
    // data format: array of { dataname, data, type } entries
|
||||
// options object should include:
|
||||
// - natsUrl: NATS server URL
|
||||
// - fileserverUrl: base URL of the file server
|
||||
// - sizeThreshold: threshold in bytes for transport selection
|
||||
// - correlationId: optional correlation ID for tracing
|
||||
```
|
||||
|
||||
**Flow:**
|
||||
1. Serialize data to Arrow IPC buffer (if table)
|
||||
2. Check payload size
|
||||
3. If < threshold: publish directly to NATS
|
||||
4. If >= threshold: upload to HTTP server, publish NATS with URL
|
||||
**Input Format:**
|
||||
- `data` - **Must be an array of `{ dataname, data, type }` entries**: `[{ dataname: "dataname1", data: data1, type: "type1" }, { dataname: "dataname2", data: data2, type: "type2" }, ...]`
|
||||
- Even for single payloads: `[{ dataname: "dataname1", data: data1, type: "type1" }]`
|
||||
- Each payload can have a different type, enabling mixed-content messages
|
||||
|
||||
#### SmartReceive Handler
|
||||
**Flow:**
|
||||
1. Iterate through the list of (dataname, data, type) tuples
|
||||
2. For each payload: extract the type from the tuple and serialize accordingly
|
||||
3. Check payload size
|
||||
4. If < threshold: publish directly to NATS
|
||||
5. If >= threshold: upload to HTTP server, publish NATS with URL
|
||||
|
||||
#### smartreceive Handler
|
||||
|
||||
```javascript
|
||||
async function SmartReceive(msg, options = {})
|
||||
async function smartreceive(msg, options = {})
|
||||
// options object should include:
|
||||
// - fileserverDownloadHandler: function to fetch data from file server URL
|
||||
// - max_retries: maximum retry attempts for fetching URL
|
||||
// - base_delay: initial delay for exponential backoff in ms
|
||||
// - max_delay: maximum delay for exponential backoff in ms
|
||||
// - correlationId: optional correlation ID for tracing
|
||||
```
|
||||
|
||||
**Flow:**
|
||||
1. Parse envelope
|
||||
2. Check transport type
|
||||
3. If direct: decode Base64 payload
|
||||
4. If link: fetch with exponential backoff
|
||||
5. Deserialize Arrow IPC with zero-copy
|
||||
**Process Flow:**
|
||||
1. Parse the JSON envelope to extract the `payloads` array
|
||||
2. Iterate through each payload in `payloads`
|
||||
3. For each payload:
|
||||
- Determine transport type (`direct` or `link`)
|
||||
- If `direct`: decode Base64 data from the message
|
||||
- If `link`: fetch data from URL using exponential backoff
|
||||
- Deserialize based on payload type (`dictionary`, `table`, `binary`, etc.)
|
||||
4. Return list of `(dataname, data, type)` tuples
|
||||
|
||||
## Scenario Implementations
|
||||
|
||||
### Scenario 1: Command & Control (Small JSON)
|
||||
### Scenario 1: Command & Control (Small Dictionary)
|
||||
|
||||
**Julia (Receiver):**
|
||||
```julia
|
||||
@@ -206,8 +425,8 @@ async function SmartReceive(msg, options = {})
|
||||
|
||||
**JavaScript (Sender):**
|
||||
```javascript
|
||||
// Create small JSON config
|
||||
// Send via SmartSend with type="json"
|
||||
// Create small dictionary config
|
||||
// Send via smartsend with type="dictionary"
|
||||
```
|
||||
|
||||
### Scenario 2: Deep Dive Analysis (Large Arrow Table)
|
||||
@@ -235,7 +454,7 @@ async function SmartReceive(msg, options = {})
|
||||
```javascript
|
||||
// Capture audio chunk
|
||||
// Send as binary with metadata headers
|
||||
// Use SmartSend with type="audio"
|
||||
// Use smartsend with type="audio"
|
||||
```
|
||||
|
||||
**Julia (Receiver):**
|
||||
@@ -260,6 +479,76 @@ async function SmartReceive(msg, options = {})
|
||||
// Process historical and real-time messages
|
||||
```
|
||||
|
||||
### Scenario 5: Selection (Low Bandwidth)
|
||||
|
||||
**Focus:** Small Arrow tables, Julia to JavaScript. Julia sends a small DataFrame to a JavaScript dashboard so the user can choose an option.
|
||||
|
||||
**Julia (Sender):**
|
||||
```julia
|
||||
# Create small DataFrame (e.g., 50KB - 500KB)
|
||||
# Convert to Arrow IPC stream
|
||||
# Check payload size (< 1MB threshold)
|
||||
# Publish directly to NATS with Base64-encoded payload
|
||||
# Include metadata for dashboard selection context
|
||||
```
|
||||
|
||||
**JavaScript (Receiver):**
|
||||
```javascript
|
||||
// Receive NATS message with direct transport
|
||||
// Decode Base64 payload
|
||||
// Parse Arrow IPC with zero-copy
|
||||
// Load into selection UI component (e.g., dropdown, table)
|
||||
// User makes selection
|
||||
// Send selection back to Julia
|
||||
```
|
||||
|
||||
**Use Case:** Julia server generates a list of available options (e.g., file selections, configuration presets) as a small DataFrame and sends to JavaScript dashboard for user selection. The selection is then sent back to Julia for processing.
|
||||
|
||||
### Scenario 6: Chat System
|
||||
|
||||
**Focus:** Every conversational message can combine any number of components of any size: text, images, audio, video, tables, and files, from brief snippets to high-resolution images, large audio files, extensive tables, and massive documents. Supports claim-check delivery and full bi-directional messaging.
|
||||
|
||||
**Multi-Payload Support:** The system supports mixed-payload messages where a single message can contain multiple payloads with different transport strategies. The `smartreceive` function iterates through all payloads in the envelope and processes each according to its transport type.
|
||||
|
||||
**Julia (Sender/Receiver):**
|
||||
```julia
|
||||
# Build chat message with mixed payloads:
|
||||
# - Text: direct transport (Base64)
|
||||
# - Small images: direct transport (Base64)
|
||||
# - Large images: link transport (HTTP URL)
|
||||
# - Audio/video: link transport (HTTP URL)
|
||||
# - Tables: direct or link depending on size
|
||||
# - Files: link transport (HTTP URL)
|
||||
#
|
||||
# Each payload uses appropriate transport strategy:
|
||||
# - Size < 1MB → direct (NATS + Base64)
|
||||
# - Size >= 1MB → link (HTTP upload + NATS URL)
|
||||
#
|
||||
# Include claim-check metadata for delivery tracking
|
||||
# Support bidirectional messaging with replyTo fields
|
||||
```
|
||||
|
||||
**JavaScript (Sender/Receiver):**
|
||||
```javascript
|
||||
// Build chat message with mixed content:
|
||||
// - User input text: direct transport
|
||||
// - Selected image: check size, use appropriate transport
|
||||
// - Audio recording: link transport for large files
|
||||
// - File attachment: link transport
|
||||
//
|
||||
// Parse received message:
|
||||
// - Direct payloads: decode Base64
|
||||
// - Link payloads: fetch from HTTP with exponential backoff
|
||||
// - Deserialize all payloads appropriately
|
||||
//
|
||||
// Render mixed content in chat interface
|
||||
// Support bidirectional reply with claim-check delivery confirmation
|
||||
```
|
||||
|
||||
**Use Case:** Full-featured chat system supporting rich media. User can send text, small images directly, or upload large files that get uploaded to HTTP server and referenced via URLs. Claim-check pattern ensures reliable delivery tracking for all message components.
|
||||
|
||||
**Implementation Note:** The `smartreceive` function iterates through all payloads in the envelope and processes each according to its transport type. See the standard API format in Section 1: `msgEnvelope_v1` supports `AbstractArray{msgPayload_v1}` for multiple payloads.
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Zero-Copy Reading
|
||||
@@ -280,8 +569,8 @@ async function SmartReceive(msg, options = {})
|
||||
## Testing Strategy
|
||||
|
||||
### Unit Tests
|
||||
- Test SmartSend with various payload sizes
|
||||
- Test SmartReceive with direct and link transport
|
||||
- Test smartsend with various payload sizes
|
||||
- Test smartreceive with direct and link transport
|
||||
- Test Arrow IPC serialization/deserialization
|
||||
|
||||
### Integration Tests
|
||||
|
||||
599
docs/implementation.md
Normal file
@@ -0,0 +1,599 @@
|
||||
# Implementation Guide: Bi-Directional Data Bridge
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes the implementation of the high-performance, bi-directional data bridge between Julia and JavaScript services using NATS (Core & JetStream), implementing the Claim-Check pattern for large payloads.
|
||||
|
||||
### Multi-Payload Support
|
||||
|
||||
The implementation uses a **standardized list-of-tuples format** for all payload operations. **Even when sending a single payload, the user must wrap it in a list.**
|
||||
|
||||
**API Standard:**
|
||||
```julia
|
||||
# Input format for smartsend (always a list of tuples with type info)
|
||||
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
|
||||
|
||||
# Output format for smartreceive (always returns a list of tuples with type info)
|
||||
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
|
||||
```
|
||||
|
||||
Where `type` can be: `"text"`, `"dictionary"`, `"table"`, `"image"`, `"audio"`, `"video"`, `"binary"`
|
||||
|
||||
**Examples:**
|
||||
```julia
|
||||
# Single payload - still wrapped in a list (type is required as third element)
|
||||
smartsend("/test", [(dataname1, data1, "text")], ...)
|
||||
|
||||
# Multiple payloads in one message (each payload has its own type)
|
||||
smartsend("/test", [(dataname1, data1, "dictionary"), (dataname2, data2, "table")], ...)
|
||||
|
||||
# Receive always returns a list with type info
|
||||
payloads = smartreceive(msg, ...)
|
||||
# payloads = [(dataname1, data1, "text"), (dataname2, data2, "table"), ...]
|
||||
```
|
||||
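The list-of-tuples contract above can be enforced with a small validator. This is a hedged sketch: `validatePayloads` and `ALLOWED_TYPES` are illustrative names, and the JavaScript side represents tuples either as `[dataname, data, type]` arrays or as `{ dataname, data, type }` objects.

```javascript
// Allowed type tags, taken from the list above.
const ALLOWED_TYPES = new Set([
  "text", "dictionary", "table", "image", "audio", "video", "binary",
]);

function validatePayloads(payloads) {
  if (!Array.isArray(payloads) || payloads.length === 0) {
    throw new TypeError("payloads must be a non-empty list of (dataname, data, type) tuples");
  }
  for (const [dataname, , type] of payloads.map((p) =>
    Array.isArray(p) ? p : [p.dataname, p.data, p.type])) {
    if (typeof dataname !== "string") throw new TypeError("dataname must be a string");
    if (!ALLOWED_TYPES.has(type)) throw new TypeError(`unknown payload type: ${type}`);
  }
  return true;
}
```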
|
||||
## Architecture
|
||||
|
||||
The implementation follows the Claim-Check pattern:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────┐
|
||||
│ SmartSend Function │
|
||||
└─────────────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────────────┐
|
||||
│ Is payload size < 1MB? │
|
||||
└─────────────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
┌─────────────────┴─────────────────┐
|
||||
▼ ▼
|
||||
┌─────────────────┐ ┌─────────────────┐
|
||||
│ Direct Path │ │ Link Path │
|
||||
│   (< 1MB)       │            │   (>= 1MB)      │
|
||||
│ │ │ │
|
||||
│ • Serialize to │ │ • Serialize to │
|
||||
│ IOBuffer │ │ IOBuffer │
|
||||
│ • Base64 encode │ │ • Upload to │
|
||||
│ • Publish to │ │ HTTP Server │
|
||||
│ NATS │ │ • Publish to │
|
||||
│ │ │ NATS with URL │
|
||||
└─────────────────┘ └─────────────────┘
|
||||
```
|
||||
|
||||
## Files
|
||||
|
||||
### Julia Module: [`src/julia_bridge.jl`](../src/julia_bridge.jl)
|
||||
|
||||
The Julia implementation provides:
|
||||
|
||||
- **[`MessageEnvelope`](../src/julia_bridge.jl)**: Struct for the unified JSON envelope
|
||||
- **[`SmartSend()`](../src/julia_bridge.jl)**: Handles transport selection based on payload size
|
||||
- **[`SmartReceive()`](../src/julia_bridge.jl)**: Handles both direct and link transport
|
||||
|
||||
### JavaScript Module: [`src/NATSBridge.js`](../src/NATSBridge.js)
|
||||
|
||||
The JavaScript implementation provides:
|
||||
|
||||
- **`MessageEnvelope` class**: For the unified JSON envelope
|
||||
- **`MessagePayload` class**: For individual payload representation
|
||||
- **[`smartsend()`](../src/NATSBridge.js)**: Handles transport selection based on payload size
|
||||
- **[`smartreceive()`](../src/NATSBridge.js)**: Handles both direct and link transport
|
||||
|
||||
## Installation
|
||||
|
||||
### Julia Dependencies
|
||||
|
||||
```julia
|
||||
using Pkg
|
||||
Pkg.add("NATS")
|
||||
Pkg.add("Arrow")
|
||||
Pkg.add("JSON3")
|
||||
Pkg.add("HTTP")
|
||||
Pkg.add("UUIDs")
|
||||
Pkg.add("Dates")
|
||||
```
|
||||
|
||||
### JavaScript Dependencies
|
||||
|
||||
```bash
|
||||
npm install nats apache-arrow uuid
|
||||
```
|
||||
|
||||
## Usage Tutorial
|
||||
|
||||
### Step 1: Start NATS Server
|
||||
|
||||
```bash
|
||||
docker run -p 4222:4222 nats:latest
|
||||
```
|
||||
|
||||
### Step 2: Start HTTP File Server (optional)
|
||||
|
||||
```bash
|
||||
# Create a directory for file uploads
|
||||
mkdir -p /tmp/fileserver
|
||||
|
||||
# Use any HTTP server that supports POST for file uploads
|
||||
# Example placeholder: Python's built-in server (note: http.server serves GET
# only; a real upload-capable server such as plik is needed for POST uploads)
|
||||
python3 -m http.server 8080 --directory /tmp/fileserver
|
||||
```
|
||||
|
||||
### Step 3: Run Test Scenarios
|
||||
|
||||
```bash
|
||||
# Scenario 1: Command & Control (JavaScript sender)
|
||||
node test/scenario1_command_control.js
|
||||
|
||||
# Scenario 2: Large Arrow Table (JavaScript sender)
|
||||
node test/scenario2_large_table.js
|
||||
|
||||
# Scenario 3: Julia-to-Julia communication
|
||||
# Run both Julia and JavaScript versions
|
||||
julia test/scenario3_julia_to_julia.jl
|
||||
node test/scenario3_julia_to_julia.js
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Scenario 0: Basic Multi-Payload Example
|
||||
|
||||
#### Julia (Sender)
|
||||
```julia
|
||||
using NATSBridge
|
||||
|
||||
# Send multiple payloads in one message (type is required per payload)
|
||||
smartsend(
|
||||
"/test",
|
||||
[("dataname1", data1, "dictionary"), ("dataname2", data2, "table")],
|
||||
nats_url="nats://localhost:4222",
|
||||
fileserver_url="http://localhost:8080",
|
||||
metadata=Dict("custom_key" => "custom_value")
|
||||
)
|
||||
|
||||
# Even single payload must be wrapped in a list with type
|
||||
smartsend("/test", [("single_data", mydata, "dictionary")])
|
||||
```
|
||||
|
||||
#### Julia (Receiver)
|
||||
```julia
|
||||
using NATSBridge
|
||||
|
||||
# Receive returns a list of payloads with type info
|
||||
payloads = smartreceive(msg, "http://localhost:8080")
|
||||
# payloads = [(dataname1, data1, "dictionary"), (dataname2, data2, "table"), ...]
|
||||
```
|
||||
|
||||
### Scenario 1: Command & Control (Small JSON)
|
||||
|
||||
#### JavaScript (Sender)
|
||||
```javascript
|
||||
const { smartsend } = require('./src/NATSBridge');
|
||||
|
||||
// Single payload wrapped in a list
|
||||
const config = [{
|
||||
dataname: "config",
|
||||
data: { step_size: 0.01, iterations: 1000 },
|
||||
type: "dictionary"
|
||||
}];
|
||||
|
||||
await smartsend("control", config, {
|
||||
correlationId: "unique-id"
|
||||
});
|
||||
|
||||
// Multiple payloads
|
||||
const configs = [
|
||||
{
|
||||
dataname: "config1",
|
||||
data: { step_size: 0.01 },
|
||||
type: "dictionary"
|
||||
},
|
||||
{
|
||||
dataname: "config2",
|
||||
data: { iterations: 1000 },
|
||||
type: "dictionary"
|
||||
}
|
||||
];
|
||||
|
||||
await smartsend("control", configs);
|
||||
```
|
||||
|
||||
#### Julia (Receiver)
|
||||
```julia
|
||||
using NATS
|
||||
using JSON3
|
||||
|
||||
# Subscribe to control subject
|
||||
subscribe(nats, "control") do msg
|
||||
env = MessageEnvelope(String(msg.data))
|
||||
config = JSON3.read(env.payload)
|
||||
|
||||
# Execute simulation with parameters
|
||||
step_size = config.step_size
|
||||
iterations = config.iterations
|
||||
|
||||
# Send acknowledgment
|
||||
response = Dict("status" => "Running", "correlation_id" => env.correlation_id)
|
||||
    publish(nats, "control_response", JSON3.write(response))
|
||||
end
|
||||
```
|
||||
|
||||
### JavaScript (Receiver)
|
||||
```javascript
|
||||
const { smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
// Subscribe to messages
|
||||
const nc = await connect({ servers: ['nats://localhost:4222'] });
|
||||
const sub = nc.subscribe("control");
|
||||
|
||||
for await (const msg of sub) {
|
||||
const result = await smartreceive(msg);
|
||||
|
||||
// Process the result
|
||||
for (const { dataname, data, type } of result) {
|
||||
console.log(`Received ${dataname} of type ${type}`);
|
||||
console.log(`Data: ${JSON.stringify(data)}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Scenario 2: Deep Dive Analysis (Large Arrow Table)
|
||||
|
||||
#### Julia (Sender)
|
||||
```julia
|
||||
using Arrow
|
||||
using DataFrames
|
||||
|
||||
# Create large DataFrame
|
||||
df = DataFrame(
|
||||
id = 1:10_000_000,
|
||||
value = rand(10_000_000),
|
||||
category = rand(["A", "B", "C"], 10_000_000)
|
||||
)
|
||||
|
||||
# Send via smartsend - wrapped in a list (type is part of each tuple)
|
||||
smartsend("analysis_results", [("table_data", df, "table")])
|
||||
```
|
||||
|
||||
#### JavaScript (Receiver)
|
||||
```javascript
|
||||
const { smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
const result = await smartreceive(msg);
|
||||
|
||||
// Use table data for visualization with Perspective.js or D3
|
||||
// Note: Tables are sent as arrays of objects in JavaScript
|
||||
const table = result[0].data; // first payload's table data
|
||||
```
|
||||
|
||||
### Scenario 3: Live Binary Processing
|
||||
|
||||
#### JavaScript (Sender)
|
||||
```javascript
|
||||
const { smartsend } = require('./src/NATSBridge');
|
||||
|
||||
// Binary data wrapped in a list
|
||||
const binaryData = [{
|
||||
dataname: "audio_chunk",
|
||||
data: binaryBuffer, // ArrayBuffer or Uint8Array
|
||||
type: "binary"
|
||||
}];
|
||||
|
||||
await smartsend("binary_input", binaryData, {
|
||||
metadata: {
|
||||
sample_rate: 44100,
|
||||
channels: 1
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
#### Julia (Receiver)
|
||||
```julia
|
||||
using WAV
|
||||
using DSP
|
||||
|
||||
# Receive binary data
|
||||
function process_binary(data)
|
||||
# Perform FFT or AI transcription
|
||||
spectrum = fft(data)
|
||||
|
||||
# Send results back (JSON + Arrow table)
|
||||
results = Dict("transcription" => "sample text", "spectrum" => spectrum)
|
||||
    smartsend("binary_output", [("results", results, "dictionary")])
|
||||
end
|
||||
```
|
||||
|
||||
### JavaScript (Receiver)
|
||||
```javascript
|
||||
const { smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
// Receive binary data
|
||||
async function process_binary(msg) {
|
||||
const result = await smartreceive(msg);
|
||||
|
||||
// Process the binary data
|
||||
for (const { dataname, data, type } of result) {
|
||||
if (type === "binary") {
|
||||
// data is an ArrayBuffer or Uint8Array
|
||||
console.log(`Received binary data: ${dataname}, size: ${data.length}`);
|
||||
// Perform FFT or AI transcription here
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Scenario 4: Catch-Up (JetStream)
|
||||
|
||||
#### Julia (Producer)
|
||||
```julia
|
||||
using NATSBridge
|
||||
|
||||
function publish_health_status(nats_url)
|
||||
# Send status wrapped in a list (type is part of each tuple)
|
||||
status = Dict("cpu" => rand(), "memory" => rand())
|
||||
smartsend("health", [("status", status, "dictionary")], nats_url=nats_url)
|
||||
    sleep(5) # Pause 5 seconds; call this function in a loop for periodic publishing
|
||||
end
|
||||
```
|
||||
|
||||
#### JavaScript (Consumer)
|
||||
```javascript
|
||||
const { connect } = require('nats');
|
||||
const { smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
const nc = await connect({ servers: ['nats://localhost:4222'] });
|
||||
const js = nc.jetstream();
|
||||
|
||||
// Request replay from last 10 minutes
|
||||
const consumer = await js.pullSubscribe("health", {
|
||||
durable_name: "catchup",
|
||||
max_batch: 100,
|
||||
max_ack_wait: 30000
|
||||
});
|
||||
|
||||
// Process historical and real-time messages
|
||||
for await (const msg of consumer) {
|
||||
const result = await smartreceive(msg);
|
||||
// result contains the list of payloads
|
||||
// Each payload has: dataname, data, type
|
||||
msg.ack();
|
||||
}
|
||||
```
|
||||
|
||||
### Scenario 5: Selection (Low Bandwidth)
|
||||
|
||||
**Focus:** Small Arrow tables, Julia to JavaScript. Julia sends a small DataFrame to a JavaScript dashboard so the user can choose an option.
|
||||
|
||||
**Julia (Sender):**
|
||||
```julia
|
||||
using NATSBridge
|
||||
using DataFrames
|
||||
|
||||
# Create small DataFrame (e.g., 50KB - 500KB)
|
||||
options_df = DataFrame(
|
||||
id = 1:10,
|
||||
name = ["Option A", "Option B", "Option C", "Option D", "Option E",
|
||||
"Option F", "Option G", "Option H", "Option I", "Option J"],
|
||||
description = ["Description A", "Description B", "Description C", "Description D", "Description E",
|
||||
"Description F", "Description G", "Description H", "Description I", "Description J"]
|
||||
)
|
||||
|
||||
# Convert to Arrow IPC stream
|
||||
# Check payload size (< 1MB threshold)
|
||||
# Publish directly to NATS with Base64-encoded payload
|
||||
# Include metadata for dashboard selection context
|
||||
smartsend(
|
||||
"dashboard.selection",
|
||||
[("options_table", options_df, "table")],
|
||||
nats_url="nats://localhost:4222",
|
||||
metadata=Dict("context" => "user_selection")
|
||||
)
|
||||
```
|
||||
|
||||
**JavaScript (Receiver):**
|
||||
```javascript
|
||||
const { smartreceive, smartsend } = require('./src/NATSBridge');
|
||||
|
||||
// Receive NATS message with direct transport
|
||||
const result = await smartreceive(msg);
|
||||
|
||||
// Decode Base64 payload (for direct transport)
|
||||
// For tables, data is an array of objects
|
||||
const table = result[0].data; // the options table, as an array of row objects
|
||||
|
||||
// User makes selection
|
||||
const selection = uiComponent.getSelectedOption();
|
||||
|
||||
// Send selection back to Julia
|
||||
await smartsend("dashboard.response", [
|
||||
{ dataname: "selected_option", data: selection, type: "dictionary" }
|
||||
]);
|
||||
```
|
||||
|
||||
**Use Case:** Julia server generates a list of available options (e.g., file selections, configuration presets) as a small DataFrame and sends to JavaScript dashboard for user selection. The selection is then sent back to Julia for processing.
|
||||
|
||||
### Scenario 6: Chat System
|
||||
|
||||
**Focus:** Every conversational message can combine any number of components of any size: text, images, audio, video, tables, and files, from brief snippets to high-resolution images, large audio files, extensive tables, and massive documents. Supports claim-check delivery and full bi-directional messaging.
|
||||
|
||||
**Multi-Payload Support:** The system supports mixed-payload messages where a single message can contain multiple payloads with different transport strategies. The `smartreceive` function iterates through all payloads in the envelope and processes each according to its transport type.
|
||||
|
||||
**Julia (Sender/Receiver):**
|
||||
```julia
|
||||
using NATSBridge
|
||||
using DataFrames
|
||||
|
||||
# Build chat message with mixed payloads:
|
||||
# - Text: direct transport (Base64)
|
||||
# - Small images: direct transport (Base64)
|
||||
# - Large images: link transport (HTTP URL)
|
||||
# - Audio/video: link transport (HTTP URL)
|
||||
# - Tables: direct or link depending on size
|
||||
# - Files: link transport (HTTP URL)
|
||||
#
|
||||
# Each payload uses appropriate transport strategy:
|
||||
# - Size < 1MB → direct (NATS + Base64)
|
||||
# - Size >= 1MB → link (HTTP upload + NATS URL)
|
||||
#
|
||||
# Include claim-check metadata for delivery tracking
|
||||
# Support bidirectional messaging with replyTo fields
|
||||
|
||||
# Example: Chat with text, small image, and large file
|
||||
chat_message = [
|
||||
("message_text", "Hello, this is a test message!", "text"),
|
||||
("user_avatar", image_bytes, "image"), # Small image, direct transport
|
||||
("large_document", large_file_bytes, "binary") # Large file, link transport
|
||||
]
|
||||
|
||||
smartsend(
|
||||
"chat.room123",
|
||||
chat_message,
|
||||
nats_url="nats://localhost:4222",
|
||||
msg_purpose="chat",
|
||||
reply_to="chat.room123.responses"
|
||||
)
|
||||
```
|
||||
|
||||
**JavaScript (Sender/Receiver):**
|
||||
```javascript
|
||||
const { smartsend, smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
// Build chat message with mixed content:
|
||||
// - User input text: direct transport
|
||||
// - Selected image: check size, use appropriate transport
|
||||
// - Audio recording: link transport for large files
|
||||
// - File attachment: link transport
|
||||
//
|
||||
// Parse received message:
|
||||
// - Direct payloads: decode Base64
|
||||
// - Link payloads: fetch from HTTP with exponential backoff
|
||||
// - Deserialize all payloads appropriately
|
||||
//
|
||||
// Render mixed content in chat interface
|
||||
// Support bidirectional reply with claim-check delivery confirmation
|
||||
|
||||
// Example: Send chat with mixed content
|
||||
const message = [
|
||||
{
|
||||
dataname: "text",
|
||||
data: "Hello from JavaScript!",
|
||||
type: "text"
|
||||
},
|
||||
{
|
||||
dataname: "image",
|
||||
data: selectedImageBuffer, // Small image (ArrayBuffer or Uint8Array)
|
||||
type: "image"
|
||||
},
|
||||
{
|
||||
dataname: "audio",
|
||||
    data: audioBuffer, // Large audio bytes; the size pushes it to link transport
|
||||
type: "audio"
|
||||
}
|
||||
];
|
||||
|
||||
await smartsend("chat.room123", message);
|
||||
```
|
||||
|
||||
**Use Case:** Full-featured chat system supporting rich media. User can send text, small images directly, or upload large files that get uploaded to HTTP server and referenced via URLs. Claim-check pattern ensures reliable delivery tracking for all message components.
|
||||
|
||||
**Implementation Note:** The `smartreceive` function iterates through all payloads in the envelope and processes each according to its transport type. See the standard API format in Section 1: `msgEnvelope_v1` supports `AbstractArray{msgPayload_v1}` for multiple payloads.
|
||||
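The per-payload transport choice for a mixed chat message can be sketched as below. This is an assumption-laden sketch: `planTransports` is an illustrative helper, not part of `NATSBridge.js`.

```javascript
const THRESHOLD = 1_000_000; // 1 MB, matching the rules above

// Maps each { dataname, data, type } entry to its transport:
// direct for payloads under the threshold, link otherwise.
function planTransports(payloads, threshold = THRESHOLD) {
  return payloads.map(({ dataname, data, type }) => {
    const size = typeof data === "string"
      ? Buffer.byteLength(data)
      : Buffer.from(data).length;
    return { dataname, type, transport: size < threshold ? "direct" : "link" };
  });
}
```

In a real send path, `smartsend` would then Base64-embed the `direct` entries and upload the `link` entries before publishing the envelope.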
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Default | Description |
|
||||
|----------|---------|-------------|
|
||||
| `NATS_URL` | `nats://localhost:4222` | NATS server URL |
|
||||
| `FILESERVER_URL` | `http://localhost:8080` | HTTP file server URL (base URL without `/upload` suffix) |
|
||||
| `SIZE_THRESHOLD` | `1_000_000` | Size threshold in bytes (1MB) |
|
||||
|
||||
### Message Envelope Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"correlationId": "uuid-v4-string",
|
||||
"msgId": "uuid-v4-string",
|
||||
"timestamp": "2024-01-15T10:30:00Z",
|
||||
|
||||
"sendTo": "topic/subject",
|
||||
"msgPurpose": "ACK | NACK | updateStatus | shutdown | chat",
|
||||
"senderName": "agent-wine-web-frontend",
|
||||
"senderId": "uuid4",
|
||||
"receiverName": "agent-backend",
|
||||
"receiverId": "uuid4",
|
||||
"replyTo": "topic",
|
||||
"replyToMsgId": "uuid4",
|
||||
"BrokerURL": "nats://localhost:4222",
|
||||
|
||||
"metadata": {
|
||||
"content_type": "application/octet-stream",
|
||||
"content_length": 123456
|
||||
},
|
||||
|
||||
"payloads": [
|
||||
{
|
||||
"id": "uuid4",
|
||||
"dataname": "login_image",
|
||||
"type": "image",
|
||||
"transport": "direct",
|
||||
"encoding": "base64",
|
||||
"size": 15433,
|
||||
"data": "base64-encoded-string",
|
||||
"metadata": {
|
||||
"checksum": "sha256_hash"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Zero-Copy Reading
|
||||
- Use Arrow's memory-mapped file reading
|
||||
- Avoid unnecessary data copying during deserialization
|
||||
- Use Apache Arrow's native IPC reader
|
||||
|
||||
### Exponential Backoff
|
||||
- Maximum retry count: 5
|
||||
- Base delay: 100ms, max delay: 5000ms
|
||||
- Implemented in both Julia and JavaScript implementations
|
||||
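With those parameters, the retry schedule works out as follows (a sketch; the helper name is illustrative):

```javascript
// Delay before retry i is baseDelay * 2^(i-1) ms, capped at maxDelay.
function backoffDelays(maxRetries = 5, baseDelay = 100, maxDelay = 5000) {
  return Array.from({ length: maxRetries }, (_, i) =>
    Math.min(baseDelay * 2 ** i, maxDelay));
}
// Defaults give delays of 100, 200, 400, 800, 1600 ms.
```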
|
||||
### Correlation ID Logging
|
||||
- Log correlation_id at every stage
|
||||
- Include: send, receive, serialize, deserialize
|
||||
- Use structured logging format
|
||||
|
||||
## Testing
|
||||
|
||||
Run the test scripts:
|
||||
|
||||
```bash
|
||||
# Scenario 1: Command & Control (JavaScript sender)
|
||||
node test/scenario1_command_control.js
|
||||
|
||||
# Scenario 2: Large Arrow Table (JavaScript sender)
|
||||
node test/scenario2_large_table.js
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **NATS Connection Failed**
|
||||
- Ensure NATS server is running
|
||||
- Check NATS_URL configuration
|
||||
|
||||
2. **HTTP Upload Failed**
|
||||
- Ensure file server is running
|
||||
- Check FILESERVER_URL configuration
|
||||
- Verify upload permissions
|
||||
|
||||
3. **Arrow IPC Deserialization Error**
|
||||
- Ensure data is properly serialized to Arrow format
|
||||
- Check Arrow version compatibility
|
||||
|
||||
## License
|
||||
|
||||
MIT
|
||||
43
etc.jl
@@ -1,42 +1,21 @@

Check architecture.jl, NATSBridge.jl and its test files:
- test_julia_to_julia_table_receiver.jl
- test_julia_to_julia_table_sender.jl

""" fileServerURL = "http://192.168.88.104:8080"
filepath = "/home/ton/docker-apps/sendreceive/image/test.zip"
filename = basename(filepath)
filebytes = read(filepath)
Now I want to test sending a mix-content message from Julia serviceA to Julia serviceB, for example, a chat system.
The test message must show that any combination, number, and data size of text | json | table | image | audio | video | binary can be sent and received.

plik_oneshot_upload - Upload a single file to a plik server using one-shot mode
Can you write me the following test files:
- test_julia_to_julia_mix_receiver.jl
- test_julia_to_julia_mix_sender.jl

This function uploads a raw byte array to a plik server in one-shot mode (a single-use upload session).
It first creates a one-shot upload session by sending a POST request with `{"OneShot": true}`,
retrieves an upload ID and token, then uploads the file data as multipart form data using the token.

The function handles the entire flow:
1. Obtains an upload ID and token from the server
2. Uploads the provided binary data as a file using the `X-UploadToken` header
3. Returns identifiers and download URL for the uploaded file

# Arguments:
- `fileServerURL::String` - Base URL of the plik server (e.g., `"http://192.168.88.104:8080"`)
- `filename::String` - Name of the file being uploaded
- `data::Vector{UInt8}` - Raw byte data of the file content

# Return:
- A named tuple with fields:
  - `uploadid::String` - ID of the one-shot upload session
  - `fileid::String` - ID of the uploaded file within the session
  - `downloadurl::String` - Full URL to download the uploaded file
1. create a tutorial file "tutorial_julia.md" for NATSBridge.jl
2. create a walkthrough file "walkthrough_julia.md" for NATSBridge.jl

# Example
```jldoctest
using HTTP, JSON
You may consult architecture.md for more info.

# Example data: "Hello World!" as bytes
data = Vector{UInt8}("Hello World!")

# Upload to local plik server
result = plik_oneshot_upload("http://192.168.88.104:8080", "hello.txt", data)

# Download URL for the uploaded file
println(result.downloadurl)
```
"""
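The two-step flow described in the docstring can be sketched at the HTTP level. The fragment below (Python, purely for illustration — the real helper is the Julia function `plik_oneshot_upload`; `build_oneshot_create_request` and `build_upload_headers` are hypothetical names, and the exact upload path is plik-specific) only constructs the request shapes without sending anything:

```python
import json

def build_oneshot_create_request(file_server_url):
    """Step 1: create a one-shot upload session via POST /upload."""
    return {
        "method": "POST",
        "url": f"{file_server_url}/upload",
        "body": json.dumps({"OneShot": True}),
    }

def build_upload_headers(upload_token):
    """Step 2: the multipart file upload is authenticated with X-UploadToken."""
    return {"X-UploadToken": upload_token}

req = build_oneshot_create_request("http://192.168.88.104:8080")
headers = build_upload_headers("secret-token")
```

The server's response to step 1 supplies the upload ID and token that step 2 needs, which is why the two requests cannot be issued in parallel.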
package.json (new file)
@@ -0,0 +1,28 @@
{
  "name": "natsbridge",
  "version": "1.0.0",
  "description": "Bi-Directional Data Bridge for JavaScript using NATS",
  "main": "src/NATSBridge.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "lint": "eslint src/*.js test/*.js"
  },
  "keywords": [
    "nats",
    "message-broker",
    "bridge",
    "arrow",
    "serialization"
  ],
  "author": "",
  "license": "MIT",
  "dependencies": {
    "nats": "^2.9.0",
    "apache-arrow": "^14.0.0",
    "uuid": "^9.0.0"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0"
  }
}
@@ -3,96 +3,197 @@

# This module provides functionality for sending and receiving data across network boundaries
# using NATS as the message bus, with support for both direct payload transport and
# URL-based transport for larger payloads.
#
# File Server Handler Architecture:
# The system uses handler functions to abstract file server operations, allowing support
# for different file server implementations (e.g., Plik, AWS S3, custom HTTP server).
#
# Handler Function Signatures:
#
# ```julia
# # Upload handler - uploads data to file server and returns URL
# fileserverUploadHandler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
#
# # Download handler - fetches data from file server URL with exponential backoff
# fileserverDownloadHandler(url::String, max_retries::Int, base_delay::Int, max_delay::Int, correlation_id::String)::Vector{UInt8}
# ```
#
# Multi-Payload Support (Standard API):
# The system uses a standardized list-of-tuples format for all payload operations.
# Even when sending a single payload, the user must wrap it in a list.
#
# API Standard:
# ```julia
# # Input format for smartsend (always a list of tuples with type info)
# [(dataname1, data1, type1), (dataname2, data2, type2), ...]
#
# # Output format for smartreceive (always returns a list of tuples)
# [(dataname1, data1, type1), (dataname2, data2, type2), ...]
# ```
#
# Supported types: "text", "dictionary", "table", "image", "audio", "video", "binary"
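The list-of-tuples standard above is language-agnostic, so it can be validated from any client. As a minimal sketch (in Python rather than Julia, purely for illustration — `validate_payloads` is a hypothetical helper, not part of NATSBridge):

```python
# The payload types named in the module header comment.
SUPPORTED_TYPES = {"text", "dictionary", "table", "image", "audio", "video", "binary"}

def validate_payloads(payloads):
    """Check that every entry is a (dataname, data, type) tuple with a supported type."""
    for entry in payloads:
        if len(entry) != 3:
            raise ValueError(f"expected (dataname, data, type), got {entry!r}")
        dataname, _data, ptype = entry
        if ptype not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported payload type {ptype!r} for {dataname!r}")
    return True

# Even a single payload is wrapped in a list, per the API standard.
validate_payloads([("message_text", "Hello!", "text")])
validate_payloads([("message_text", "Hello!", "text"),
                   ("user_image", b"\x89PNG", "image")])
```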

module NATSBridge

using NATS, JSON, Arrow, HTTP, UUIDs, Dates, Base64
using Revise
using NATS, JSON, Arrow, HTTP, UUIDs, Dates, Base64, PrettyPrinting
# ---------------------------------------------- 100 --------------------------------------------- #

# Constants
const DEFAULT_SIZE_THRESHOLD = 1_000_000 # 1MB - threshold for switching from direct to link transport
const DEFAULT_NATS_URL = "nats://localhost:4222" # Default NATS server URL
const DEFAULT_FILESERVER_URL = "http://localhost:8080/upload" # Default HTTP file server URL for link transport
const DEFAULT_FILESERVER_URL = "http://localhost:8080" # Default HTTP file server URL for link transport

""" Struct for the unified JSON envelope
|
||||
This struct represents a standardized message format that can carry either
|
||||
direct payload data or a URL reference, allowing flexible transport strategies
|
||||
based on payload size and requirements.
|
||||
"""
|
||||
struct MessageEnvelope
|
||||
correlation_id::String # Unique identifier to track messages across systems
|
||||
type::String # Data type indicator (e.g., "json", "table", "binary")
|
||||
transport::String # Transport strategy: "direct" (base64 encoded bytes) or "link" (URL reference)
|
||||
payload::Union{String, Nothing} # Base64-encoded payload for direct transport
|
||||
url::Union{String, Nothing} # URL reference for link transport
|
||||
metadata::Dict{String, Any} # Additional metadata about the payload
|
||||
struct msgPayload_v1
|
||||
id::String # id of this payload e.g. "uuid4"
|
||||
dataname::String # name of this payload e.g. "login_image"
|
||||
type::String # this payload type. Can be "text | dictionary | table | image | audio | video | binary"
|
||||
transport::String # "direct | link"
|
||||
encoding::String # "none | json | base64 | arrow-ipc"
|
||||
size::Integer # data size in bytes e.g. 15433
|
||||
data::Any # payload data in case of direct transport or a URL in case of link
|
||||
metadata::Dict{String, Any} # Dict("checksum" => "sha256_hash", ...) This metadata is for this payload
|
||||
end
|
||||
|
||||
""" Constructor for MessageEnvelope with keyword arguments and defaults
|
||||
This constructor provides a convenient way to create an envelope using keyword arguments,
|
||||
automatically generating a correlation ID if not provided, and defaulting to "json" type
|
||||
and "direct" transport.
|
||||
"""
|
||||
function MessageEnvelope(
|
||||
; correlation_id::String = string(uuid4()), # Generate unique ID if not provided
|
||||
type::String = "json", # Default data type
|
||||
transport::String = "direct", # Default transport method
|
||||
payload::Union{String, Nothing} = nothing, # No payload by default
|
||||
url::Union{String, Nothing} = nothing, # No URL by default
|
||||
metadata::Dict{String, Any} = Dict{String, Any}() # Empty metadata by default
|
||||
# constructor
|
||||
function msgPayload_v1(
|
||||
data::Any,
|
||||
type::String;
|
||||
id::String = "",
|
||||
dataname::String = string(uuid4()),
|
||||
transport::String = "direct",
|
||||
encoding::String = "none",
|
||||
size::Integer = 0,
|
||||
metadata::Dict{String, T} = Dict{String, Any}()
|
||||
) where {T<:Any}
|
||||
return msgPayload_v1(
|
||||
id,
|
||||
dataname,
|
||||
type,
|
||||
transport,
|
||||
encoding,
|
||||
size,
|
||||
data,
|
||||
metadata
|
||||
)
|
||||
end
|
||||
|
||||
|
||||
struct msgEnvelope_v1
    correlationId::String # Unique identifier to track messages across systems. Many senders can talk about the same topic.
    msgId::String         # this message id
    timestamp::String     # message published timestamp. string(Dates.now())

    sendTo::String        # topic/subject the sender sends to e.g. "/agent/wine/api/v1/prompt"
    msgPurpose::String    # purpose of this message e.g. "ACK | NACK | updateStatus | shutdown | ..."
    senderName::String    # sender name (String) e.g. "agent-wine-web-frontend"
    senderId::String      # sender id e.g. uuid4snakecase()
    receiverName::String  # msg receiver name (String) e.g. "agent-backend"
    receiverId::String    # msg receiver id; empty means everyone in the topic e.g. uuid4snakecase()

    replyTo::String       # the topic the sender asks the receiver to reply to
    replyToMsgId::String  # the message id this message is replying to
    brokerURL::String     # MQTT/NATS server address

    metadata::Dict{String, Any}
    payloads::AbstractArray{msgPayload_v1} # multiple payloads are stored here
end

# constructor
function msgEnvelope_v1(
    sendTo::String,
    payloads::AbstractArray{msgPayload_v1};
    correlationId::String = "",
    msgId::String = "",
    timestamp::String = string(Dates.now()),
    msgPurpose::String = "",
    senderName::String = "",
    senderId::String = "",
    receiverName::String = "",
    receiverId::String = "",
    replyTo::String = "",
    replyToMsgId::String = "",
    brokerURL::String = DEFAULT_NATS_URL,
    metadata::Dict{String, Any} = Dict{String, Any}()
)
MessageEnvelope(correlation_id, type, transport, payload, url, metadata)
end

""" Constructor for MessageEnvelope from JSON string
|
||||
This constructor parses a JSON string and reconstructs a MessageEnvelope struct.
|
||||
It handles the metadata field specially by converting the JSON object to a Julia Dict,
|
||||
extracting values from the JSON structure for all other fields.
|
||||
"""
|
||||
function MessageEnvelope(json_str::String)
|
||||
data = JSON.parse(json_str) # Parse JSON string into Julia data structure
|
||||
metadata = Dict{String, Any}()
|
||||
if haskey(data, :metadata) # Check if metadata exists in JSON
|
||||
metadata = Dict(String(k) => v for (k, v) in data.metadata) # Convert JSON keys to strings and store in Dict
|
||||
end
|
||||
|
||||
MessageEnvelope(
|
||||
correlation_id = String(data.correlation_id), # Extract correlation_id from JSON data
|
||||
type = String(data.type), # Extract type from JSON data
|
||||
transport = String(data.transport), # Extract transport from JSON data
|
||||
payload = haskey(data, :payload) ? String(data.payload) : nothing, # Extract payload if present
|
||||
url = haskey(data, :url) ? String(data.url) : nothing, # Extract URL if present
|
||||
metadata = metadata # Use the parsed metadata
|
||||
return msgEnvelope_v1(
|
||||
correlationId,
|
||||
msgId,
|
||||
timestamp,
|
||||
sendTo,
|
||||
msgPurpose,
|
||||
senderName,
|
||||
senderId,
|
||||
receiverName,
|
||||
receiverId,
|
||||
replyTo,
|
||||
replyToMsgId,
|
||||
brokerURL,
|
||||
metadata,
|
||||
payloads
|
||||
)
|
||||
end
|
||||
|
||||
|
||||
""" Convert MessageEnvelope to JSON string
|
||||
This function converts the MessageEnvelope struct to a JSON string representation.
|
||||
It only includes fields in the JSON output if they have non-nothing values,
|
||||
making the JSON output cleaner and more efficient.
|
||||
|
||||
""" Convert msgEnvelope_v1 to JSON string
|
||||
This function converts the msgEnvelope_v1 struct to a JSON string representation.
|
||||
"""
|
||||
function envelope_to_json(env::MessageEnvelope)
|
||||
function envelope_to_json(env::msgEnvelope_v1)
|
||||
obj = Dict{String, Any}(
|
||||
"correlation_id" => env.correlation_id, # Always include correlation_id
|
||||
"type" => env.type, # Always include type
|
||||
"transport" => env.transport # Always include transport
|
||||
"correlationId" => env.correlationId,
|
||||
"msgId" => env.msgId,
|
||||
"timestamp" => env.timestamp,
|
||||
"sendTo" => env.sendTo,
|
||||
"msgPurpose" => env.msgPurpose,
|
||||
"senderName" => env.senderName,
|
||||
"senderId" => env.senderId,
|
||||
"receiverName" => env.receiverName,
|
||||
"receiverId" => env.receiverId,
|
||||
"replyTo" => env.replyTo,
|
||||
"replyToMsgId" => env.replyToMsgId,
|
||||
"brokerURL" => env.brokerURL
|
||||
)
|
||||
|
||||
if env.payload !== nothing # Only include payload if it exists
|
||||
obj["payload"] = env.payload
|
||||
end
|
||||
|
||||
if env.url !== nothing # Only include URL if it exists
|
||||
obj["url"] = env.url
|
||||
end
|
||||
|
||||
if !isempty(env.metadata) # Only include metadata if it exists and is not empty
|
||||
obj["metadata"] = env.metadata
|
||||
obj["metadata"] = Dict(String(k) => v for (k, v) in env.metadata)
|
||||
end
|
||||
|
||||
JSON.json(obj) # Convert Dict to JSON string
|
||||
# Convert payloads to JSON array
|
||||
if !isempty(env.payloads)
|
||||
payloads_json = []
|
||||
for payload in env.payloads
|
||||
payload_obj = Dict{String, Any}(
|
||||
"id" => payload.id,
|
||||
"dataname" => payload.dataname,
|
||||
"type" => payload.type,
|
||||
"transport" => payload.transport,
|
||||
"encoding" => payload.encoding,
|
||||
"size" => payload.size,
|
||||
)
|
||||
# Include data based on transport type
|
||||
if payload.transport == "direct" && payload.data !== nothing
|
||||
if payload.encoding == "base64" || payload.encoding == "json"
|
||||
payload_obj["data"] = payload.data
|
||||
else
|
||||
# For other encodings, use base64
|
||||
payload_bytes = _get_payload_bytes(payload.data)
|
||||
payload_obj["data"] = Base64.base64encode(payload_bytes)
|
||||
end
|
||||
elseif payload.transport == "link" && payload.data !== nothing
|
||||
# For link transport, data is a URL string - include directly
|
||||
payload_obj["data"] = payload.data
|
||||
end
|
||||
if !isempty(payload.metadata)
|
||||
payload_obj["metadata"] = Dict(String(k) => v for (k, v) in payload.metadata)
|
||||
end
|
||||
push!(payloads_json, payload_obj)
|
||||
end
|
||||
obj["payloads"] = payloads_json
|
||||
end
|
||||
|
||||
JSON.json(obj)
|
||||
end
|
||||
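The wire format produced by `envelope_to_json` is plain JSON, so it can be produced and inspected from any language. A minimal Python sketch of the direct-transport payload encoding (field names follow `msgPayload_v1`; this illustrates the format only, it is not a NATSBridge API, and `payload_to_obj` is a hypothetical name):

```python
import base64
import json

def payload_to_obj(dataname, raw_bytes, ptype):
    """Build a direct-transport payload object, base64-encoding the raw bytes."""
    return {
        "id": "payload-1",  # normally a uuid4
        "dataname": dataname,
        "type": ptype,
        "transport": "direct",
        "encoding": "base64",
        "size": len(raw_bytes),
        "data": base64.b64encode(raw_bytes).decode("ascii"),
    }

obj = payload_to_obj("message_text", b"Hello!", "text")
wire = json.dumps({"payloads": [obj]})

# A receiver decodes the payload back to the original bytes.
decoded = base64.b64decode(json.loads(wire)["payloads"][0]["data"])
assert decoded == b"Hello!"
```

Because `size` records the pre-encoding byte count, a receiver can sanity-check the decoded bytes against it.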
@@ -112,73 +213,96 @@ This function intelligently routes data delivery based on payload size relative

If the serialized payload is smaller than `size_threshold`, it encodes the data as Base64 and publishes directly over NATS.
Otherwise, it uploads the data to a fileserver (by default using `plik_oneshot_upload`) and publishes only the download URL over NATS.

The function accepts a list of (dataname, data, type) tuples as input and processes each payload individually.
Each payload can have a different type, enabling mixed-content messages (e.g., chat with text, images, audio).

The function workflow:
1. Serializes the provided data according to the specified format (`type`)
2. Compares the serialized size against `size_threshold`
3. For small payloads: encodes as Base64, constructs a "direct" MessageEnvelope, and publishes to NATS
4. For large payloads: uploads to the fileserver, constructs a "link" MessageEnvelope with the URL, and publishes to NATS
1. Iterates through the list of (dataname, data, type) tuples
2. For each payload: extracts the type from the tuple and serializes accordingly
3. Compares the serialized size against `size_threshold`
4. For small payloads: encodes as Base64, constructs a "direct" msgPayload_v1
5. For large payloads: uploads to the fileserver, constructs a "link" msgPayload_v1 with the URL

# Arguments:
- `subject::String` - NATS subject to publish the message to
- `data::Any` - Data payload to send (any Julia object)
- `type::String = "json"` - Serialization format: `"json"` or `"arrow"`
- `data::AbstractArray{Tuple{String, Any, String}}` - List of (dataname, data, type) tuples to send
  - `dataname::String` - Name of the payload
  - `data::Any` - The actual data to send
  - `type::String` - Payload type: "text", "dictionary", "table", "image", "audio", "video", "binary"
- No standalone `type` parameter - type is specified per payload

# Keyword Arguments:
- `dataname::String = string(UUIDs.uuid4())` - Filename to use when uploading to fileserver (auto-generated UUID if not provided)
- `nats_url::String = DEFAULT_NATS_URL` - URL of the NATS server
- `fileserver_url::String = DEFAULT_FILESERVER_URL` - Base URL of the fileserver (e.g., `"http://localhost:8080"`)
- `fileServerUploadHandler::Function = plik_oneshot_upload` - Function to handle fileserver uploads (must match signature of `plik_oneshot_upload`)
- `fileserverUploadHandler::Function = plik_oneshot_upload` - Function to handle fileserver uploads (must return Dict with "status", "uploadid", "fileid", "url" keys)
- `size_threshold::Int = DEFAULT_SIZE_THRESHOLD` - Threshold in bytes separating direct vs link transport
- `correlation_id::Union{String, Nothing} = nothing` - Optional correlation ID for tracing; if `nothing`, a UUID is generated
- `msg_purpose::String = "chat"` - Purpose of the message: "ACK", "NACK", "updateStatus", "shutdown", "chat", etc.
- `sender_name::String = "NATSBridge"` - Name of the sender
- `receiver_name::String = ""` - Name of the receiver (empty string means broadcast)
- `receiver_id::String = ""` - UUID of the receiver (empty string means broadcast)
- `reply_to::String = ""` - Topic to reply to (empty string if no reply expected)
- `reply_to_msg_id::String = ""` - Message ID this message is replying to

# Return:
- A `MessageEnvelope` object containing metadata and transport information:
  - `correlation_id::String` - Unique identifier for this message exchange
  - `type::String` - Serialization type used (`"json"` or `"arrow"`)
  - `transport::String` - Either `"direct"` or `"link"`
  - `payload::Union{String, Nothing}` - Base64-encoded data for direct transport, `nothing` for link transport
  - `url::Union{String, Nothing}` - Download URL for link transport, `nothing` for direct transport
  - `metadata::Dict` - Additional metadata (e.g., `"content_length"`, `"format"`)
- A `msgEnvelope_v1` object containing metadata and transport information

# Example
```julia
using UUIDs

# Send a small struct directly via NATS
# Send a single payload (still wrapped in a list)
data = Dict("key" => "value")
env = smartsend("my.subject", data, "json")
env = smartsend("my.subject", [("dataname1", data, "dictionary")])

# Send multiple payloads in one message with different types
data1 = Dict("key1" => "value1")
data2 = rand(10_000) # Small array
env = smartsend("my.subject", [("dataname1", data1, "dictionary"), ("dataname2", data2, "table")])

# Send a large array using fileserver upload
data = rand(10_000_000) # ~80 MB
env = smartsend("large.data", data, "arrow")
env = smartsend("large.data", [("large_table", data, "table")])

# In another process, retrieve and deserialize:
# msg = subscribe(nats_url, "my.subject")
# env = json_to_envelope(msg.data)
# data = _deserialize_data(Base64.decode(env.payload), env.type)
# Mixed content (e.g., chat with text and image)
env = smartsend("chat.subject", [
    ("message_text", "Hello!", "text"),
    ("user_image", image_data, "image"),
    ("audio_clip", audio_data, "audio")
])
```
"""
function smartsend(
    subject::String, # smartreceive's subject
    data::Any,
    type::String = "json";
    dataname="NA",
    data::AbstractArray{Tuple{String, T1, String}, 1}; # List of (dataname, data, type) tuples
    nats_url::String = DEFAULT_NATS_URL,
    fileserver_url::String = DEFAULT_FILESERVER_URL,
    fileServerUploadHandler::Function=plik_oneshot_upload, # a function to handle uploading data to a specific HTTP fileserver
    fileserver_url = DEFAULT_FILESERVER_URL,
    fileserverUploadHandler::Function=plik_oneshot_upload, # a function to handle uploading data to a specific HTTP fileserver
    size_threshold::Int = DEFAULT_SIZE_THRESHOLD,
    correlation_id::Union{String, Nothing} = nothing
)
    correlation_id::Union{String, Nothing} = nothing,
    msg_purpose::String = "chat",
    sender_name::String = "NATSBridge",
    receiver_name::String = "",
    receiver_id::String = "",
    reply_to::String = "",
    reply_to_msg_id::String = ""
) where {T1<:Any}

    # Generate correlation ID if not provided
    cid = correlation_id !== nothing ? correlation_id : string(uuid4()) # Create or use provided correlation ID

    log_trace(cid, "Starting smartsend for subject: $subject") # Log start of send operation

    # Generate message metadata
    msg_id = string(uuid4())

    # Process each payload in the list
    payloads = msgPayload_v1[]
    for (dataname, payload_data, payload_type) in data
        # Serialize data based on type
        payload_bytes = _serialize_data(data, type) # Convert data to bytes based on type
        payload_bytes = _serialize_data(payload_data, payload_type)

        payload_size = length(payload_bytes) # Calculate payload size in bytes
        log_trace(cid, "Serialized payload size: $payload_size bytes") # Log payload size
        log_trace(cid, "Serialized payload '$dataname' (type: $payload_type) size: $payload_size bytes") # Log payload size

        # Decision: Direct vs Link
        if payload_size < size_threshold # Check if payload is small enough for direct transport
@@ -186,87 +310,127 @@ function smartsend(
            payload_b64 = Base64.base64encode(payload_bytes) # Encode bytes as base64 string
            log_trace(cid, "Using direct transport for $payload_size bytes") # Log transport choice

            env = MessageEnvelope( # Create envelope for direct transport
                correlation_id = cid,
                type = type,
            # Create msgPayload_v1 for direct transport
            payload = msgPayload_v1(
                payload_b64,
                payload_type;
                id = string(uuid4()),
                dataname = dataname,
                transport = "direct",
                payload = payload_b64,
                metadata = Dict("dataname" => dataname, "content_length" => payload_size, "format" => "arrow_ipc_stream")
                encoding = "base64",
                size = payload_size,
                metadata = Dict{String, Any}("payload_bytes" => payload_size)
            )

            msg_json = envelope_to_json(env) # Convert envelope to JSON
            publish_message(nats_url, subject, msg_json, cid) # Publish message to NATS

            return env # Return the envelope for tracking
            push!(payloads, payload)
        else
            # Link path - Upload to HTTP server, send URL via NATS
            log_trace(cid, "Using link transport, uploading to fileserver") # Log link transport choice

            # Upload to HTTP server
            response = fileServerUploadHandler(fileserver_url, dataname, payload_bytes)
            response = fileserverUploadHandler(fileserver_url, dataname, payload_bytes)

            if response[:status] != 200 # Check if upload was successful
                error("Failed to upload data to fileserver: $(response[:status])") # Throw error if upload failed
            if response["status"] != 200 # Check if upload was successful
                error("Failed to upload data to fileserver: $(response["status"])") # Throw error if upload failed
            end

            url = response[:url] # URL for the uploaded data
            url = response["url"] # URL for the uploaded data
            log_trace(cid, "Uploaded to URL: $url") # Log successful upload

            env = MessageEnvelope( # Create envelope for link transport
                correlation_id = cid,
                type = type,
            # Create msgPayload_v1 for link transport
            payload = msgPayload_v1(
                url,
                payload_type;
                id = string(uuid4()),
                dataname = dataname,
                transport = "link",
                url = url,
                metadata = Dict("dataname" => dataname, "content_length" => payload_size, "format" => "arrow_ipc_stream")
                encoding = "none",
                size = payload_size,
                metadata = Dict{String, Any}()
            )
            push!(payloads, payload)
        end
    end

    # Create msgEnvelope_v1 with all payloads
    env = msgEnvelope_v1(
        subject,
        payloads;
        correlationId = cid,
        msgId = msg_id,
        msgPurpose = msg_purpose,
        senderName = sender_name,
        senderId = string(uuid4()),
        receiverName = receiver_name,
        receiverId = receiver_id,
        replyTo = reply_to,
        replyToMsgId = reply_to_msg_id,
        brokerURL = nats_url,
        metadata = Dict{String, Any}(),
    )

    msg_json = envelope_to_json(env) # Convert envelope to JSON
    publish_message(nats_url, subject, msg_json, cid) # Publish message to NATS

    return env # Return the envelope for tracking
end
end
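The direct-vs-link routing above reduces to a single size comparison per payload. A Python sketch of that decision (the threshold value mirrors the Julia constant `DEFAULT_SIZE_THRESHOLD`; `choose_transport` is a hypothetical name used only for illustration):

```python
DEFAULT_SIZE_THRESHOLD = 1_000_000  # 1 MB, as in the Julia constant

def choose_transport(payload_bytes, size_threshold=DEFAULT_SIZE_THRESHOLD):
    """Small payloads ride inside the NATS message; large ones go via the file server."""
    if len(payload_bytes) < size_threshold:
        return "direct"  # base64-encode and embed in the envelope
    return "link"        # upload to the file server, embed only the URL

assert choose_transport(b"x" * 100) == "direct"
assert choose_transport(b"x" * 2_000_000) == "link"
```

Note that the comparison is against the serialized size, so a payload near the threshold may switch transports depending on its encoding overhead.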
""" _serialize_data - Serialize data according to specified format
|
||||
|
||||
This function serializes arbitrary Julia data into a binary representation based on the specified format.
|
||||
It supports three serialization formats:
|
||||
- `"json"`: Serializes data as JSON and returns the UTF-8 byte representation
|
||||
It supports multiple serialization formats:
|
||||
- `"text"`: Treats data as text and converts to UTF-8 bytes
|
||||
- `"dictionary"`: Serializes data as JSON and returns the UTF-8 byte representation
|
||||
- `"table"`: Serializes data as an Arrow IPC stream (table format) and returns the byte stream
|
||||
- `"binary"`: Expects already-binary data (either `IOBuffer` or `Vector{UInt8}`) and returns it as bytes
|
||||
- `"image"`: Expects binary image data (Vector{UInt8}) and returns it as bytes
|
||||
- `"audio"`: Expects binary audio data (Vector{UInt8}) and returns it as bytes
|
||||
- `"video"`: Expects binary video data (Vector{UInt8}) and returns it as bytes
|
||||
- `"binary"`: Generic binary data (Vector{UInt8} or IOBuffer) and returns bytes
|
||||
|
||||
The function handles format-specific serialization logic:
|
||||
1. For `"json"`: Converts Julia data to JSON string, then encodes to bytes
|
||||
2. For `"table"`: Uses Arrow.jl to write data as an Arrow IPC stream to an in-memory buffer
|
||||
3. For `"binary"`: Extracts bytes from `IOBuffer` or returns `Vector{UInt8}` directly
|
||||
1. For `"text"`: Converts string to UTF-8 bytes
|
||||
2. For `"dictionary"`: Converts Julia data to JSON string, then encodes to bytes
|
||||
3. For `"table"`: Uses Arrow.jl to write data as an Arrow IPC stream to an in-memory buffer
|
||||
4. For `"image"`, `"audio"`, `"video"`: Treats data as binary (Vector{UInt8})
|
||||
5. For `"binary"`: Extracts bytes from `IOBuffer` or returns `Vector{UInt8}` directly
|
||||
|
||||
# Arguments:
|
||||
- `data::Any` - Data to serialize (JSON-serializable for `"json"`, table-like for `"table"`, binary for `"binary"`)
|
||||
- `type::String` - Target format: `"json"`, `"table"`, or `"binary"`
|
||||
- `data::Any` - Data to serialize (string for `"text"`, JSON-serializable for `"dictionary"`, table-like for `"table"`, binary for `"image"`, `"audio"`, `"video"`, `"binary"`)
|
||||
|
||||
# Return:
|
||||
- `Vector{UInt8}` - Binary representation of the serialized data
|
||||
|
||||
# Throws:
|
||||
- `Error` if `type` is not one of `"json"`, `"table"`, or `"binary"`
|
||||
- `Error` if `type == "binary"` but `data` is neither `IOBuffer` nor `Vector{UInt8}`
|
||||
- `Error` if `type` is not one of the supported types
|
||||
- `Error` if `type` is `"image"`, `"audio"`, or `"video"` but `data` is not `Vector{UInt8}`
|
||||
|
||||
# Example
|
||||
```julia
|
||||
using JSON, Arrow, DataFrames
|
||||
|
||||
# Text serialization
|
||||
text_data = "Hello, World!"
|
||||
text_bytes = _serialize_data(text_data, "text")
|
||||
|
||||
# JSON serialization
|
||||
json_data = Dict("name" => "Alice", "age" => 30)
|
||||
json_bytes = _serialize_data(json_data, "json")
|
||||
json_bytes = _serialize_data(json_data, "dictionary")
|
||||
|
||||
# Table serialization with a DataFrame (recommended for tabular data)
|
||||
df = DataFrame(id = 1:3, name = ["Alice", "Bob", "Charlie"], score = [95, 88, 92])
|
||||
table_bytes = _serialize_data(df, "table")
|
||||
|
||||
# Table serialization with named tuple of vectors (also supported)
|
||||
nt = (id = [1, 2, 3], name = ["Alice", "Bob", "Charlie"], score = [95, 88, 92])
|
||||
table_bytes_nt = _serialize_data(nt, "table")
|
||||
# Image data (Vector{UInt8})
|
||||
image_bytes = UInt8[1, 2, 3] # Image bytes
|
||||
image_serialized = _serialize_data(image_bytes, "image")
|
||||
|
||||
# Audio data (Vector{UInt8})
|
||||
audio_bytes = UInt8[1, 2, 3] # Audio bytes
|
||||
audio_serialized = _serialize_data(audio_bytes, "audio")
|
||||
|
||||
# Video data (Vector{UInt8})
|
||||
video_bytes = UInt8[1, 2, 3] # Video bytes
|
||||
video_serialized = _serialize_data(video_bytes, "video")
|
||||
|
||||
# Binary data (IOBuffer)
|
||||
buf = IOBuffer()
|
||||
@@ -278,13 +442,55 @@ binary_bytes_direct = _serialize_data(UInt8[1, 2, 3], "binary")
```
"""
function _serialize_data(data::Any, type::String)
    if type == "json" # JSON data - serialize directly
    """ Example of how JSON.jl converts: dictionary -> json string -> json string bytes -> json string -> json object
    d = Dict(
        "name" => "ton",
        "age" => 20,
        "metadata" => Dict(
            "height" => 155,
            "wife" => "jane"
        )
    )

    json_str = JSON.json(d)
    json_str_bytes = Vector{UInt8}(json_str)
    json_str_2 = String(json_str_bytes)
    json_obj = JSON.parse(json_str_2)
    """

    if type == "text" # Text data - convert to UTF-8 bytes
        if isa(data, String)
            data_bytes = Vector{UInt8}(data) # Convert string to UTF-8 bytes
            return data_bytes
        else
            error("Text data must be a String")
        end
    elseif type == "dictionary" # JSON data - serialize directly
        json_str = JSON.json(data) # Convert Julia data to JSON string
        return bytes(json_str) # Convert JSON string to bytes
        json_str_bytes = Vector{UInt8}(json_str) # Convert JSON string to bytes
        return json_str_bytes
    elseif type == "table" # Table data - convert to Arrow IPC stream
        io = IOBuffer() # Create in-memory buffer
        Arrow.write(io, data) # Write data as Arrow IPC stream to buffer
        return take!(io) # Return the buffer contents as bytes
    elseif type == "image" # Image data - treat as binary
        if isa(data, Vector{UInt8})
            return data # Return binary data directly
        else
            error("Image data must be Vector{UInt8}")
        end
    elseif type == "audio" # Audio data - treat as binary
        if isa(data, Vector{UInt8})
            return data # Return binary data directly
        else
            error("Audio data must be Vector{UInt8}")
        end
    elseif type == "video" # Video data - treat as binary
        if isa(data, Vector{UInt8})
            return data # Return binary data directly
        else
            error("Video data must be Vector{UInt8}")
        end
    elseif type == "binary" # Binary data - treat as binary
        if isa(data, IOBuffer) # Check if data is an IOBuffer
            return take!(data) # Return buffer contents as bytes
@@ -324,55 +530,83 @@ end
This function processes incoming NATS messages, handling both direct transport
(base64 decoded payloads) and link transport (URL-based payloads).
It deserializes the data based on the transport type and returns the result.
An HTTP file server is required along with its upload function.
An HTTP file server is required along with its download function.

Arguments:
- `msg::NATS.Message` - NATS message to process
- `msg::NATS.Msg` - NATS message to process
- `fileserverDownloadHandler::Function` - Function to handle downloading data from file server URLs

Keyword Arguments:
- `fileserver_url::String` - HTTP file server URL for link transport (default: DEFAULT_FILESERVER_URL)
- `max_retries::Int` - Maximum retry attempts for fetching URL (default: 5)
- `base_delay::Int` - Initial delay for exponential backoff in ms (default: 100)
- `max_delay::Int` - Maximum delay for exponential backoff in ms (default: 5000)

Return:
- Tuple `(data = deserialized_data, envelope = MessageEnvelope)` - Data and envelope
- `AbstractArray{Tuple{String, Any, String}}` - List of (dataname, data, type) tuples

# Example
```julia
# Receive and process message
msg = nats_message # NATS message
payloads = smartreceive(msg; fileserverDownloadHandler=_fetch_with_backoff, max_retries=5, base_delay=100, max_delay=5000)
# payloads = [("dataname1", data1, "type1"), ("dataname2", data2, "type2"), ...]
```
"""
function smartreceive(
    msg::NATS.Msg;
    fileserver_url::String = DEFAULT_FILESERVER_URL,
    fileserverDownloadHandler::Function=_fetch_with_backoff,
    max_retries::Int = 5,
    base_delay::Int = 100,
    max_delay::Int = 5000
)
    # Parse the envelope
    env = MessageEnvelope(String(msg.payload)) # Parse NATS message data as JSON envelope
    log_trace(env.correlation_id, "Processing received message") # Log message processing start
    # Parse the JSON envelope
    json_data = JSON.parse(String(msg.payload))
    log_trace(json_data["correlationId"], "Processing received message") # Log message processing start

    # Check transport type
    if env.transport == "direct" # Direct transport - payload is in the message
        log_trace(env.correlation_id, "Direct transport - decoding payload") # Log direct transport handling
    # Process all payloads in the envelope
    payloads_list = Tuple{String, Any, String}[]

    # Get number of payloads
    num_payloads = length(json_data["payloads"])

    for i in 1:num_payloads
        payload = json_data["payloads"][i]
        transport = String(payload["transport"])
        dataname = String(payload["dataname"])

        if transport == "direct" # Direct transport - payload is in the message
            log_trace(json_data["correlationId"], "Direct transport - decoding payload '$dataname'") # Log direct transport handling

            # Extract base64 payload from the payload
            payload_b64 = String(payload["data"])

            # Decode Base64 payload
            payload_bytes = Base64.base64decode(env.payload) # Decode base64 payload to bytes
            payload_bytes = Base64.base64decode(payload_b64) # Decode base64 payload to bytes

            # Deserialize based on type
            data = _deserialize_data(payload_bytes, env.type, env.correlation_id, env.metadata) # Convert bytes to Julia data
            data_type = String(payload["type"])
            data = _deserialize_data(payload_bytes, data_type, json_data["correlationId"])

            return (data = data, envelope = env) # Return data and envelope as tuple
        elseif env.transport == "link" # Link transport - payload is at URL
            log_trace(env.correlation_id, "Link transport - fetching from URL") # Log link transport handling
            push!(payloads_list, (dataname, data, data_type))
        elseif transport == "link" # Link transport - payload is at URL
            # Extract download URL from the payload
            url = String(payload["data"])
            log_trace(json_data["correlationId"], "Link transport - fetching '$dataname' from URL: $url") # Log link transport handling

            # Fetch with exponential backoff
            downloaded_data = _fetch_with_backoff(env.url, max_retries, base_delay, max_delay, env.correlation_id) # Fetch data from URL
            # Fetch with exponential backoff using the download handler
            downloaded_data = fileserverDownloadHandler(url, max_retries, base_delay, max_delay, json_data["correlationId"])

            # Deserialize based on type
            data = _deserialize_data(downloaded_data, env.type, env.correlation_id, env.metadata) # Convert bytes to Julia data
            data_type = String(payload["type"])
            data = _deserialize_data(downloaded_data, data_type, json_data["correlationId"])

            return (data = data, envelope = env) # Return data and envelope as tuple
            push!(payloads_list, (dataname, data, data_type))
        else # Unknown transport type
            error("Unknown transport type: $(env.transport)") # Throw error for unknown transport
            error("Unknown transport type for payload '$dataname': $(transport)") # Throw error for unknown transport
        end
    end

    return payloads_list # Return list of (dataname, data, data_type) tuples
end

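The retry keywords above describe an exponential backoff: the delay starts at `base_delay`, doubles after each failed attempt, and is capped at `max_delay`. A minimal sketch of that schedule, independent of NATS or HTTP (the function name `computeBackoffDelays` is hypothetical, not part of this module):

```javascript
// Compute the wait time before each retry: the delay starts at baseDelay,
// doubles after every failed attempt, and never exceeds maxDelay.
function computeBackoffDelays(maxRetries, baseDelay, maxDelay) {
    const delays = [];
    let delay = baseDelay;
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        delays.push(delay);
        delay = Math.min(delay * 2, maxDelay); // cap the doubling
    }
    return delays;
}

// With the documented defaults (5 retries, 100 ms base, 5000 ms cap):
console.log(computeBackoffDelays(5, 100, 5000)); // [ 100, 200, 400, 800, 1600 ]
```

With these defaults the cap never triggers; it only matters for longer retry budgets (e.g. 8 retries with a 500 ms cap plateau at 500 ms).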
@@ -423,31 +657,37 @@ end

""" Deserialize bytes to data based on type
This internal function converts serialized bytes back to Julia data based on type.
It handles "json" (JSON deserialization), "table" (Arrow IPC deserialization),
and "binary" (binary data).
It handles "text" (string), "dictionary" (JSON deserialization), "table" (Arrow IPC deserialization),
"image" (binary data), "audio" (binary data), "video" (binary data), and "binary" (binary data).

Arguments:
- `data::Vector{UInt8}` - Serialized data as bytes
- `type::String` - Data type ("json", "table", "binary")
- `type::String` - Data type ("text", "dictionary", "table", "image", "audio", "video", "binary")
- `correlation_id::String` - Correlation ID for logging
- `metadata::Dict{String, Any}` - Metadata about the data

Return:
- Deserialized data (DataFrame for "table", JSON data for "json", bytes for "binary")
- Deserialized data (String for "text", DataFrame for "table", JSON data for "dictionary", bytes for "image", "audio", "video", "binary")
"""
function _deserialize_data(
    data::Vector{UInt8},
    type::String,
    correlation_id::String,
    metadata::Dict{String, Any}
    correlation_id::String
)
    if type == "json" # JSON data - deserialize
    if type == "text" # Text data - convert to string
        return String(data) # Convert bytes to string
    elseif type == "dictionary" # JSON data - deserialize
        json_str = String(data) # Convert bytes to string
        return JSON.parse(json_str) # Parse JSON string to Julia data structure
        return JSON.parse(json_str) # Parse JSON string to JSON object
    elseif type == "table" # Table data - deserialize Arrow IPC stream
        io = IOBuffer(data) # Create buffer from bytes
        df = Arrow.Table(io) # Read Arrow IPC format from buffer
        return df # Return DataFrame
    elseif type == "image" # Image data - return binary
        return data # Return bytes directly
    elseif type == "audio" # Audio data - return binary
        return data # Return bytes directly
    elseif type == "video" # Video data - return binary
        return data # Return bytes directly
    elseif type == "binary" # Binary data - return binary
        return data # Return bytes directly
    else # Unknown type
@@ -456,21 +696,6 @@ function _deserialize_data(
end


# """ Decode base64 string to bytes
# This internal function decodes a base64-encoded string back to binary data.
# It's a wrapper around Base64.decode for consistency in the module.

# Arguments:
# - `str::String` - Base64-encoded string to decode

# Return:
# - Vector{UInt8} - Decoded binary data
# """
# function base64decode(str::String)
#     return Base64.decode(str) # Decode base64 string to bytes using Julia's Base64 module
# end

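The "dictionary" branch above (UTF-8 JSON bytes in, parsed object out) has a direct counterpart in the JavaScript module later in this diff. A quick Node.js round-trip check of that encoding can be sketched as follows (the helper names `serializeDictionary`/`deserializeDictionary` are illustrative, not module APIs):

```javascript
// Round-trip a "dictionary" payload: serialize an object to UTF-8 JSON
// bytes, then deserialize the bytes back into an object.
function serializeDictionary(obj) {
    return new TextEncoder().encode(JSON.stringify(obj)); // Uint8Array of UTF-8 bytes
}

function deserializeDictionary(bytes) {
    return JSON.parse(new TextDecoder().decode(bytes)); // bytes -> string -> object
}

const original = { user: "alice", count: 3 };
const restored = deserializeDictionary(serializeDictionary(original));
console.log(restored.user, restored.count); // alice 3
```

The same bytes are what the Julia side decodes with `String(data)` followed by `JSON.parse`, so the two runtimes interoperate on this type.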
""" plik_oneshot_upload - Upload a single file to a plik server using one-shot mode
|
||||
|
||||
This function uploads a raw byte array to a plik server in one-shot mode (no upload session).
|
||||
@@ -488,28 +713,27 @@ The function workflow:
|
||||
- `data::Vector{UInt8}` - Raw byte data of the file content
|
||||
|
||||
# Return:
|
||||
- A named tuple with fields:
|
||||
- `status::Integer` - HTTP server response status
|
||||
- `uploadid::String` - ID of the one-shot upload session
|
||||
- `fileid::String` - ID of the uploaded file within the session
|
||||
- `url::String` - Full URL to download the uploaded file
|
||||
- A Dict with keys:
|
||||
- `"status"` - HTTP server response status
|
||||
- `"uploadid"` - ID of the one-shot upload session
|
||||
- `"fileid"` - ID of the uploaded file within the session
|
||||
- `"url"` - Full URL to download the uploaded file
|
||||
|
||||
# Example
|
||||
```jldoctest
|
||||
```julia
|
||||
using HTTP, JSON
|
||||
|
||||
fileServerURL = "http://localhost:8080"
|
||||
filepath = "./test.zip"
|
||||
filename = basename(filepath)
|
||||
filebytes = read(filepath) # read(filepath) output is raw bytes of the file
|
||||
filename = "test.txt"
|
||||
data = UInt8["hello world"]
|
||||
|
||||
# Upload to local plik server
|
||||
status, uploadid, fileid, url = plik_oneshot_upload(fileServerURL, filename, filebytes)
|
||||
result = plik_oneshot_upload(fileServerURL, filename, data)
|
||||
|
||||
# to download an uploaded file
|
||||
curl -L -O "url"
|
||||
# Access the result as a Dict
|
||||
# result["status"], result["uploadid"], result["fileid"], result["url"]
|
||||
```
|
||||
""" #[x]
|
||||
"""
|
||||
function plik_oneshot_upload(fileServerURL::String, filename::String, data::Vector{UInt8})
|
||||
|
||||
# ----------------------------------------- get upload id ---------------------------------------- #
|
||||
@@ -537,25 +761,20 @@ function plik_oneshot_upload(fileServerURL::String, filename::String, data::Vect
|
||||
httpResponse = nothing
|
||||
try
|
||||
httpResponse = HTTP.post(url_upload, headers, form)
|
||||
# println("Status: ", httpResponse.status)
|
||||
responseJson = JSON.parse(httpResponse.body)
|
||||
catch e
|
||||
@error "Request failed" exception=e
|
||||
end
|
||||
|
||||
fileid=responseJson["id"]
|
||||
fileid = responseJson["id"]
|
||||
|
||||
# url of the uploaded data e.g. "http://192.168.1.20:8080/file/3F62E/4AgGT/test.zip"
|
||||
url = "$fileServerURL/file/$uploadid/$fileid/$filename"
|
||||
|
||||
return (status=httpResponse.status, uploadid=uploadid, fileid=fileid, url=url)
|
||||
return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
|
||||
end
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
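The download URL assembled above follows the pattern `<server>/file/<uploadid>/<fileid>/<filename>`; a small sketch of that construction (all values hypothetical, matching the example URL in the comment):

```javascript
// Build a plik-style download URL from the pieces returned by the
// one-shot upload; the server address and ids below are made-up examples.
function buildDownloadUrl(fileServerURL, uploadid, fileid, filename) {
    // encodeURIComponent guards against spaces and special characters in filenames
    return `${fileServerURL}/file/${uploadid}/${fileid}/${encodeURIComponent(filename)}`;
}

console.log(buildDownloadUrl("http://localhost:8080", "3F62E", "4AgGT", "test.zip"));
// http://localhost:8080/file/3F62E/4AgGT/test.zip
```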
""" plik_oneshot_upload(fileServerURL::String, filepath::String)
|
||||
|
||||
Upload a single file to a plik server using one-shot mode.
|
||||
@@ -574,11 +793,11 @@ The function workflow:
|
||||
- `filepath::String` - Full path to the local file to upload
|
||||
|
||||
# Return:
|
||||
- A named tuple with fields:
|
||||
- `status::Integer` - HTTP server response status
|
||||
- `uploadid::String` - ID of the one-shot upload session
|
||||
- `fileid::String` - ID of the uploaded file within the session
|
||||
- `url::String` - Full URL to download the uploaded file
|
||||
- A Dict with keys:
|
||||
- `"status"` - HTTP server response status
|
||||
- `"uploadid"` - ID of the one-shot upload session
|
||||
- `"fileid"` - ID of the uploaded file within the session
|
||||
- `"url"` - Full URL to download the uploaded file
|
||||
|
||||
# Example
|
||||
```julia
|
||||
@@ -588,12 +807,12 @@ fileServerURL = "http://localhost:8080"
|
||||
filepath = "./test.zip"
|
||||
|
||||
# Upload to local plik server
|
||||
status, uploadid, fileid, url = plik_oneshot_upload(fileServerURL, filepath)
|
||||
result = plik_oneshot_upload(fileServerURL, filepath)
|
||||
|
||||
# To download the uploaded file later (via curl as example):
|
||||
curl -L -O "url"
|
||||
# Access the result as a Dict
|
||||
# result["status"], result["uploadid"], result["fileid"], result["url"]
|
||||
```
|
||||
""" #[x]
|
||||
"""
|
||||
function plik_oneshot_upload(fileServerURL::String, filepath::String)
|
||||
|
||||
# ----------------------------------------- get upload id ---------------------------------------- #
|
||||
@@ -607,7 +826,6 @@ function plik_oneshot_upload(fileServerURL::String, filepath::String)
|
||||
|
||||
uploadid = responseJson["id"]
|
||||
uploadtoken = responseJson["uploadToken"]
|
||||
println("uploadid = ", uploadid)
|
||||
|
||||
# ------------------------------------------ upload file ----------------------------------------- #
|
||||
# Equivalent curl command: curl -X POST --header "X-UploadToken: UPLOAD_TOKEN" -F "file=@PATH_TO_FILE" http://localhost:8080/file/UPLOAD_ID
|
||||
@@ -624,18 +842,17 @@ function plik_oneshot_upload(fileServerURL::String, filepath::String)
|
||||
httpResponse = nothing
|
||||
try
|
||||
httpResponse = HTTP.post(url_upload, headers, form)
|
||||
# println("Status: ", httpResponse.status)
|
||||
responseJson = JSON.parse(httpResponse.body)
|
||||
catch e
|
||||
@error "Request failed" exception=e
|
||||
end
|
||||
|
||||
fileid=responseJson["id"]
|
||||
fileid = responseJson["id"]
|
||||
|
||||
# url of the uploaded data e.g. "http://192.168.1.20:8080/file/3F62E/4AgGT/test.zip"
|
||||
url = "$fileServerURL/file/$uploadid/$fileid/$filename"
|
||||
|
||||
return (status=httpResponse.status, uploadid=uploadid, fileid=fileid, url=url)
|
||||
return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
|
||||
end
|
||||
|
||||
|
||||
@@ -649,14 +866,6 @@ end
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,245 +1,706 @@
|
||||
/**
|
||||
* Bi-Directional Data Bridge - JavaScript Module
|
||||
* Implements SmartSend and SmartReceive for NATS communication
|
||||
* NATSBridge.js - Bi-Directional Data Bridge for JavaScript
|
||||
* Implements smartsend and smartreceive for NATS communication
|
||||
*
|
||||
* This module provides functionality for sending and receiving data across network boundaries
|
||||
* using NATS as the message bus, with support for both direct payload transport and
|
||||
* URL-based transport for larger payloads.
|
||||
*
|
||||
* File Server Handler Architecture:
|
||||
* The system uses handler functions to abstract file server operations, allowing support
|
||||
* for different file server implementations (e.g., Plik, AWS S3, custom HTTP server).
|
||||
*
|
||||
* Handler Function Signatures:
|
||||
*
|
||||
* ```javascript
|
||||
* // Upload handler - uploads data to file server and returns URL
|
||||
* // The handler is passed to smartsend as fileserverUploadHandler parameter
|
||||
* // It receives: (fileserver_url, dataname, data)
|
||||
* // Returns: { status, uploadid, fileid, url }
|
||||
* async function fileserverUploadHandler(fileserver_url, dataname, data) { ... }
|
||||
*
|
||||
* // Download handler - fetches data from file server URL with exponential backoff
|
||||
* // The handler is passed to smartreceive as fileserverDownloadHandler parameter
|
||||
* // It receives: (url, max_retries, base_delay, max_delay, correlation_id)
|
||||
* // Returns: ArrayBuffer (the downloaded data)
|
||||
* async function fileserverDownloadHandler(url, max_retries, base_delay, max_delay, correlation_id) { ... }
|
||||
* ```
|
||||
*
|
||||
* Multi-Payload Support (Standard API):
|
||||
* The system uses a standardized list-of-tuples format for all payload operations.
|
||||
* Even when sending a single payload, the user must wrap it in a list.
|
||||
*
|
||||
* API Standard:
|
||||
* ```javascript
|
||||
* // Input format for smartsend (always a list of tuples with type info)
|
||||
* [{ dataname, data, type }, ...]
|
||||
*
|
||||
* // Output format for smartreceive (always returns a list of tuples)
|
||||
* [{ dataname, data, type }, ...]
|
||||
* ```
|
||||
*
|
||||
* Supported types: "text", "dictionary", "table", "image", "audio", "video", "binary"
|
||||
*/
|
||||
|
||||
const { v4: uuidv4 } = require('uuid');
|
||||
const { decode, encode } = require('base64-url');
|
||||
const Arrow = require('apache-arrow');
|
||||
// ---------------------------------------------- 100 --------------------------------------------- #
|
||||
|
||||
// Constants
|
||||
const DEFAULT_SIZE_THRESHOLD = 1_000_000; // 1MB
|
||||
const DEFAULT_NATS_URL = 'nats://localhost:4222';
|
||||
const DEFAULT_FILESERVER_URL = 'http://localhost:8080/upload';
|
||||
const DEFAULT_SIZE_THRESHOLD = 1_000_000; // 1MB - threshold for switching from direct to link transport
|
||||
const DEFAULT_NATS_URL = "nats://localhost:4222"; // Default NATS server URL
|
||||
const DEFAULT_FILESERVER_URL = "http://localhost:8080"; // Default HTTP file server URL for link transport
|
||||
|
||||
// Logging helper
|
||||
function logTrace(correlationId, message) {
|
||||
const timestamp = new Date().toISOString();
|
||||
console.log(`[${timestamp}] [Correlation: ${correlationId}] ${message}`);
|
||||
// Helper: Generate UUID v4
|
||||
function uuid4() {
|
||||
// Simple UUID v4 generator
|
||||
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
|
||||
var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
|
||||
return v.toString(16);
|
||||
});
|
||||
}
|
||||
|
||||
// Message Envelope Class
|
||||
class MessageEnvelope {
|
||||
constructor(options = {}) {
|
||||
this.correlation_id = options.correlation_id || uuidv4();
|
||||
this.type = options.type || 'json';
|
||||
this.transport = options.transport || 'direct';
|
||||
this.payload = options.payload || null;
|
||||
this.url = options.url || null;
|
||||
// Helper: Log with correlation ID and timestamp
|
||||
function log_trace(correlation_id, message) {
|
||||
const timestamp = new Date().toISOString();
|
||||
console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
|
||||
}
|
||||
|
||||
// Helper: Get size of data in bytes
|
||||
function getDataSize(data) {
|
||||
if (typeof data === 'string') {
|
||||
return new TextEncoder().encode(data).length;
|
||||
} else if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data.byteLength;
|
||||
} else if (typeof data === 'object' && data !== null) {
|
||||
// For objects, serialize to JSON and measure
|
||||
return new TextEncoder().encode(JSON.stringify(data)).length;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Helper: Convert ArrayBuffer to Base64 string
|
||||
function arrayBufferToBase64(buffer) {
|
||||
const bytes = new Uint8Array(buffer);
|
||||
let binary = '';
|
||||
for (let i = 0; i < bytes.length; i++) {
|
||||
binary += String.fromCharCode(bytes[i]);
|
||||
}
|
||||
return btoa(binary);
|
||||
}
|
||||
|
||||
// Helper: Convert Base64 string to ArrayBuffer
|
||||
function base64ToArrayBuffer(base64) {
|
||||
const binaryString = atob(base64);
|
||||
const len = binaryString.length;
|
||||
const bytes = new Uint8Array(len);
|
||||
for (let i = 0; i < len; i++) {
|
||||
bytes[i] = binaryString.charCodeAt(i);
|
||||
}
|
||||
return bytes.buffer;
|
||||
}
|
||||
|
||||
// Helper: Serialize data based on type
|
||||
function _serialize_data(data, type) {
|
||||
/**
|
||||
* Serialize data according to specified format
|
||||
*
|
||||
* Supported formats:
|
||||
* - "text": Treats data as text and converts to UTF-8 bytes
|
||||
* - "dictionary": Serializes data as JSON and returns the UTF-8 byte representation
|
||||
* - "table": Serializes data as an Arrow IPC stream (table format) - NOT IMPLEMENTED (requires arrow library)
|
||||
* - "image": Expects binary data (ArrayBuffer) and returns it as bytes
|
||||
* - "audio": Expects binary data (ArrayBuffer) and returns it as bytes
|
||||
* - "video": Expects binary data (ArrayBuffer) and returns it as bytes
|
||||
* - "binary": Generic binary data (ArrayBuffer or Uint8Array) and returns bytes
|
||||
*/
|
||||
if (type === "text") {
|
||||
if (typeof data === 'string') {
|
||||
return new TextEncoder().encode(data).buffer;
|
||||
} else {
|
||||
throw new Error("Text data must be a String");
|
||||
}
|
||||
} else if (type === "dictionary") {
|
||||
// JSON data - serialize directly
|
||||
const jsonStr = JSON.stringify(data);
|
||||
return new TextEncoder().encode(jsonStr).buffer;
|
||||
} else if (type === "table") {
|
||||
// Table data - convert to Arrow IPC stream (NOT IMPLEMENTED in pure JavaScript)
|
||||
// This would require the apache-arrow library
|
||||
throw new Error("Table serialization requires apache-arrow library");
|
||||
} else if (type === "image") {
|
||||
if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data instanceof ArrayBuffer ? data : data.buffer;
|
||||
} else {
|
||||
throw new Error("Image data must be ArrayBuffer or Uint8Array");
|
||||
}
|
||||
} else if (type === "audio") {
|
||||
if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data instanceof ArrayBuffer ? data : data.buffer;
|
||||
} else {
|
||||
throw new Error("Audio data must be ArrayBuffer or Uint8Array");
|
||||
}
|
||||
} else if (type === "video") {
|
||||
if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data instanceof ArrayBuffer ? data : data.buffer;
|
||||
} else {
|
||||
throw new Error("Video data must be ArrayBuffer or Uint8Array");
|
||||
}
|
||||
} else if (type === "binary") {
|
||||
if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data instanceof ArrayBuffer ? data : data.buffer;
|
||||
} else {
|
||||
throw new Error("Binary data must be ArrayBuffer or Uint8Array");
|
||||
}
|
||||
} else {
|
||||
throw new Error(`Unknown type: ${type}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Helper: Deserialize bytes based on type
|
||||
function _deserialize_data(data, type, correlation_id) {
|
||||
/**
|
||||
* Deserialize bytes to data based on type
|
||||
*
|
||||
* Supported formats:
|
||||
* - "text": Converts bytes to string
|
||||
* - "dictionary": Parses JSON string
|
||||
* - "table": Parses Arrow IPC stream - NOT IMPLEMENTED (requires apache-arrow library)
|
||||
* - "image": Returns binary data
|
||||
* - "audio": Returns binary data
|
||||
* - "video": Returns binary data
|
||||
* - "binary": Returns binary data
|
||||
*/
|
||||
if (type === "text") {
|
||||
const decoder = new TextDecoder();
|
||||
return decoder.decode(new Uint8Array(data));
|
||||
} else if (type === "dictionary") {
|
||||
const decoder = new TextDecoder();
|
||||
const jsonStr = decoder.decode(new Uint8Array(data));
|
||||
return JSON.parse(jsonStr);
|
||||
} else if (type === "table") {
|
||||
// Table data - deserialize Arrow IPC stream (NOT IMPLEMENTED in pure JavaScript)
|
||||
throw new Error("Table deserialization requires apache-arrow library");
|
||||
} else if (type === "image") {
|
||||
return data;
|
||||
} else if (type === "audio") {
|
||||
return data;
|
||||
} else if (type === "video") {
|
||||
return data;
|
||||
} else if (type === "binary") {
|
||||
return data;
|
||||
} else {
|
||||
throw new Error(`Unknown type: ${type}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Helper: Upload data to file server
|
||||
async function _upload_to_fileserver(fileserver_url, dataname, data, correlation_id) {
|
||||
/**
|
||||
* Upload data to HTTP file server (plik-like API)
|
||||
*
|
||||
* This function implements the plik one-shot upload mode:
|
||||
* 1. Creates a one-shot upload session by sending POST request with {"OneShot": true}
|
||||
* 2. Uploads the file data as multipart form data
|
||||
* 3. Returns identifiers and download URL for the uploaded file
|
||||
*/
|
||||
log_trace(correlation_id, `Uploading ${dataname} to fileserver: ${fileserver_url}`);
|
||||
|
||||
// Step 1: Get upload ID and token
|
||||
const url_getUploadID = `${fileserver_url}/upload`;
|
||||
const headers = {
|
||||
"Content-Type": "application/json"
|
||||
};
|
||||
const body = JSON.stringify({ OneShot: true });
|
||||
|
||||
let response = await fetch(url_getUploadID, {
|
||||
method: "POST",
|
||||
headers: headers,
|
||||
body: body
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
|
||||
}
|
||||
|
||||
const responseJson = await response.json();
|
||||
const uploadid = responseJson.id;
|
||||
const uploadtoken = responseJson.uploadToken;
|
||||
|
||||
// Step 2: Upload file data
|
||||
const url_upload = `${fileserver_url}/file/${uploadid}`;
|
||||
|
||||
// Create multipart form data
|
||||
const formData = new FormData();
|
||||
// Create a Blob from the ArrayBuffer
|
||||
const blob = new Blob([data], { type: "application/octet-stream" });
|
||||
formData.append("file", blob, dataname);
|
||||
|
||||
response = await fetch(url_upload, {
|
||||
method: "POST",
|
||||
headers: {
|
||||
"X-UploadToken": uploadtoken
|
||||
},
|
||||
body: formData
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
|
||||
}
|
||||
|
||||
const fileResponseJson = await response.json();
|
||||
const fileid = fileResponseJson.id;
|
||||
|
||||
// Build the download URL
|
||||
const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
|
||||
|
||||
log_trace(correlation_id, `Uploaded to URL: ${url}`);
|
||||
|
||||
return {
|
||||
status: response.status,
|
||||
uploadid: uploadid,
|
||||
fileid: fileid,
|
||||
url: url
|
||||
};
|
||||
}
|
||||
|
||||
// Helper: Fetch data from URL with exponential backoff
|
||||
async function _fetch_with_backoff(url, max_retries, base_delay, max_delay, correlation_id) {
|
||||
/**
|
||||
* Fetch data from URL with retry logic using exponential backoff
|
||||
*/
|
||||
let delay = base_delay;
|
||||
|
||||
for (let attempt = 1; attempt <= max_retries; attempt++) {
|
||||
try {
|
||||
const response = await fetch(url);
|
||||
|
||||
if (response.status === 200) {
|
||||
log_trace(correlation_id, `Successfully fetched data from ${url} on attempt ${attempt}`);
|
||||
const arrayBuffer = await response.arrayBuffer();
|
||||
return arrayBuffer;
|
||||
} else {
|
||||
throw new Error(`Failed to fetch: ${response.status} ${response.statusText}`);
|
||||
}
|
||||
} catch (e) {
|
||||
log_trace(correlation_id, `Attempt ${attempt} failed: ${e.message}`);
|
||||
|
||||
if (attempt < max_retries) {
|
||||
// Sleep with exponential backoff
|
||||
await new Promise(resolve => setTimeout(resolve, delay));
|
||||
delay = Math.min(delay * 2, max_delay);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
throw new Error(`Failed to fetch data after ${max_retries} attempts`);
|
||||
}
|
||||
|
||||
// Helper: Get payload bytes from data
|
||||
function _get_payload_bytes(data) {
|
||||
if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
|
||||
return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
|
||||
} else if (typeof data === 'string') {
|
||||
return new TextEncoder().encode(data);
|
||||
} else {
|
||||
// For objects, serialize to JSON
|
||||
return new TextEncoder().encode(JSON.stringify(data));
|
||||
}
|
||||
}
|
||||
|
||||
// MessagePayload class
|
||||
class MessagePayload {
|
||||
/**
|
||||
* Represents a single payload in the message envelope
|
||||
*
|
||||
* @param {Object} options - Payload options
|
||||
* @param {string} options.id - ID of this payload (e.g., "uuid4")
|
||||
* @param {string} options.dataname - Name of this payload (e.g., "login_image")
|
||||
* @param {string} options.type - Payload type: "text", "dictionary", "table", "image", "audio", "video", "binary"
|
||||
* @param {string} options.transport - "direct" or "link"
|
||||
* @param {string} options.encoding - "none", "json", "base64", "arrow-ipc"
|
||||
* @param {number} options.size - Data size in bytes
|
||||
* @param {string|ArrayBuffer} options.data - Payload data (direct) or URL (link)
|
||||
* @param {Object} options.metadata - Metadata for this payload
|
||||
*/
|
||||
constructor(options) {
|
||||
this.id = options.id || uuid4();
|
||||
this.dataname = options.dataname;
|
||||
this.type = options.type;
|
||||
this.transport = options.transport;
|
||||
this.encoding = options.encoding;
|
||||
this.size = options.size;
|
||||
this.data = options.data;
|
||||
this.metadata = options.metadata || {};
|
||||
}
|
||||
|
||||
static fromJSON(jsonStr) {
|
||||
const data = JSON.parse(jsonStr);
|
||||
return new MessageEnvelope({
|
||||
correlation_id: data.correlation_id,
|
||||
type: data.type,
|
||||
transport: data.transport,
|
||||
payload: data.payload || null,
|
||||
url: data.url || null,
|
||||
metadata: data.metadata || {}
|
||||
});
|
||||
}
|
||||
|
||||
// Convert to JSON object
|
||||
toJSON() {
|
||||
const obj = {
|
||||
correlation_id: this.correlation_id,
|
||||
id: this.id,
|
||||
dataname: this.dataname,
|
||||
type: this.type,
|
||||
transport: this.transport
|
||||
transport: this.transport,
|
||||
encoding: this.encoding,
|
||||
size: this.size
|
||||
};
|
||||
|
||||
if (this.payload) {
|
||||
obj.payload = this.payload;
|
||||
// Include data based on transport type
|
||||
if (this.transport === "direct" && this.data !== null) {
|
||||
if (this.encoding === "base64" || this.encoding === "json") {
|
||||
obj.data = this.data;
|
||||
} else {
|
||||
// For other encodings, use base64
|
||||
const payloadBytes = _get_payload_bytes(this.data);
|
||||
obj.data = arrayBufferToBase64(payloadBytes);
|
||||
}
|
||||
|
||||
if (this.url) {
|
||||
obj.url = this.url;
|
||||
} else if (this.transport === "link" && this.data !== null) {
|
||||
// For link transport, data is a URL string
|
||||
obj.data = this.data;
|
||||
}
|
||||
|
||||
if (Object.keys(this.metadata).length > 0) {
|
||||
obj.metadata = this.metadata;
|
||||
}
|
||||
|
||||
return JSON.stringify(obj);
|
||||
return obj;
|
||||
}
|
||||
}
|
||||
|
||||
// SmartSend for JavaScript - Handles transport selection based on payload size
|
||||
async function SmartSend(subject, data, type = 'json', options = {}) {
|
||||
// MessageEnvelope class
|
||||
class MessageEnvelope {
|
||||
/**
|
||||
* Represents the message envelope containing metadata and payloads
|
||||
*
|
||||
* @param {Object} options - Envelope options
|
||||
* @param {string} options.sendTo - Topic/subject the sender sends to
|
||||
* @param {Array<MessagePayload>} options.payloads - Array of payloads
|
||||
* @param {string} options.correlationId - Unique identifier to track messages
|
||||
* @param {string} options.msgId - This message id
|
||||
* @param {string} options.timestamp - Message published timestamp
|
||||
* @param {string} options.msgPurpose - Purpose of this message
|
||||
* @param {string} options.senderName - Name of the sender
|
||||
* @param {string} options.senderId - UUID of the sender
|
||||
* @param {string} options.receiverName - Name of the receiver
|
||||
* @param {string} options.receiverId - UUID of the receiver
|
||||
* @param {string} options.replyTo - Topic to reply to
|
||||
* @param {string} options.replyToMsgId - Message id this message is replying to
|
||||
* @param {string} options.brokerURL - NATS server address
|
||||
* @param {Object} options.metadata - Metadata for the envelope
|
||||
*/
|
||||
  constructor(options) {
    this.correlationId = options.correlationId || uuid4();
    this.msgId = options.msgId || uuid4();
    this.timestamp = options.timestamp || new Date().toISOString();
    this.sendTo = options.sendTo;
    this.msgPurpose = options.msgPurpose || "";
    this.senderName = options.senderName || "";
    this.senderId = options.senderId || uuid4();
    this.receiverName = options.receiverName || "";
    this.receiverId = options.receiverId || "";
    this.replyTo = options.replyTo || "";
    this.replyToMsgId = options.replyToMsgId || "";
    this.brokerURL = options.brokerURL || DEFAULT_NATS_URL;
    this.metadata = options.metadata || {};
    this.payloads = options.payloads || [];
  }

  // Convert to a plain object (metadata and payloads are included only when non-empty)
  toJSON() {
    const obj = {
      correlationId: this.correlationId,
      msgId: this.msgId,
      timestamp: this.timestamp,
      sendTo: this.sendTo,
      msgPurpose: this.msgPurpose,
      senderName: this.senderName,
      senderId: this.senderId,
      receiverName: this.receiverName,
      receiverId: this.receiverId,
      replyTo: this.replyTo,
      replyToMsgId: this.replyToMsgId,
      brokerURL: this.brokerURL
    };

    if (Object.keys(this.metadata).length > 0) {
      obj.metadata = this.metadata;
    }

    if (this.payloads.length > 0) {
      obj.payloads = this.payloads.map(p => p.toJSON());
    }

    return obj;
  }

  // Convert to JSON string
  toString() {
    return JSON.stringify(this.toJSON());
  }
}

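Before smartsend, a quick look at the wire format: `toString()` produces a flat JSON envelope like the sketch below (every field value here is an illustrative placeholder, not output from the real class).

```javascript
// Illustrative envelope as it would appear on the wire.
// All IDs and the subject name are made-up placeholders.
const envelope = {
  correlationId: "3f2c9c1e-0000-0000-0000-000000000000",
  msgId: "7a1b2c3d-0000-0000-0000-000000000000",
  timestamp: new Date().toISOString(),
  sendTo: "chat.serviceB",
  msgPurpose: "chat",
  senderName: "service-A",
  senderId: "11111111-0000-0000-0000-000000000000",
  receiverName: "service-B",
  receiverId: "",
  replyTo: "",
  replyToMsgId: "",
  brokerURL: "nats://localhost:4222"
};

// This mirrors MessageEnvelope.toString(): serialize to a JSON string.
const wire = JSON.stringify(envelope);
console.log(JSON.parse(wire).sendTo);
```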
// SmartSend function
async function smartsend(subject, data, options = {}) {
  /**
   * Send data either directly via NATS or via a fileserver URL, depending on payload size.
   *
   * This function routes data delivery based on payload size relative to a threshold.
   * If a serialized payload is smaller than `sizeThreshold`, it is Base64-encoded and
   * published directly over NATS. Otherwise, it is uploaded to a fileserver and only the
   * download URL is published over NATS.
   *
   * @param {string} subject - NATS subject to publish the message to
   * @param {Array} data - List of {dataname, data, type} objects to send
   * @param {Object} options - Additional options
   * @param {string} options.natsUrl - URL of the NATS server (default: "nats://localhost:4222")
   * @param {string} options.fileserverUrl - Base URL of the file server (default: "http://localhost:8080")
   * @param {Function} options.fileserverUploadHandler - Function that handles fileserver uploads
   * @param {number} options.sizeThreshold - Threshold in bytes separating direct vs link transport (default: 1 MB)
   * @param {string} options.correlationId - Optional correlation ID for tracing
   * @param {string} options.msgPurpose - Purpose of the message (default: "chat")
   * @param {string} options.senderName - Name of the sender (default: "NATSBridge")
   * @param {string} options.receiverName - Name of the receiver (default: "")
   * @param {string} options.receiverId - UUID of the receiver (default: "")
   * @param {string} options.replyTo - Topic to reply to (default: "")
   * @param {string} options.replyToMsgId - Message ID this message is replying to (default: "")
   *
   * @returns {Promise<MessageEnvelope>} - The envelope, for tracking
   */
  const {
    natsUrl = DEFAULT_NATS_URL,
    fileserverUrl = DEFAULT_FILESERVER_URL,
    fileserverUploadHandler = _upload_to_fileserver,
    sizeThreshold = DEFAULT_SIZE_THRESHOLD,
    correlationId = uuid4(),
    msgPurpose = "chat",
    senderName = "NATSBridge",
    receiverName = "",
    receiverId = "",
    replyTo = "",
    replyToMsgId = ""
  } = options;

  log_trace(correlationId, `Starting smartsend for subject: ${subject}`);

  // Generate message metadata
  const msgId = uuid4();

  // Process each payload in the list
  const payloads = [];

  for (const payload of data) {
    const dataname = payload.dataname;
    const payloadData = payload.data;
    const payloadType = payload.type;

    // Serialize data based on type
    const payloadBytes = _serialize_data(payloadData, payloadType);
    const payloadSize = payloadBytes.byteLength;

    log_trace(correlationId, `Serialized payload '${dataname}' (type: ${payloadType}) size: ${payloadSize} bytes`);

    // Decision: Direct vs Link
    if (payloadSize < sizeThreshold) {
      // Direct path - Base64 encode and send via NATS
      const payloadB64 = arrayBufferToBase64(payloadBytes);
      log_trace(correlationId, `Using direct transport for ${payloadSize} bytes`);

      // Create MessagePayload for direct transport
      const payloadObj = new MessagePayload({
        dataname: dataname,
        type: payloadType,
        transport: "direct",
        encoding: "base64",
        size: payloadSize,
        data: payloadB64,
        metadata: { payload_bytes: payloadSize }
      });

      payloads.push(payloadObj);
    } else {
      // Link path - Upload to HTTP server, send URL via NATS
      log_trace(correlationId, `Using link transport, uploading to fileserver`);

      // Upload to HTTP server
      const response = await fileserverUploadHandler(fileserverUrl, dataname, payloadBytes, correlationId);

      if (response.status !== 200) {
        throw new Error(`Failed to upload data to fileserver: ${response.status}`);
      }

      const url = response.url;
      log_trace(correlationId, `Uploaded to URL: ${url}`);

      // Create MessagePayload for link transport
      const payloadObj = new MessagePayload({
        dataname: dataname,
        type: payloadType,
        transport: "link",
        encoding: "none",
        size: payloadSize,
        data: url,
        metadata: {}
      });

      payloads.push(payloadObj);
    }
  }

  // Create MessageEnvelope with all payloads
  const env = new MessageEnvelope({
    correlationId: correlationId,
    msgId: msgId,
    sendTo: subject,
    msgPurpose: msgPurpose,
    senderName: senderName,
    receiverName: receiverName,
    receiverId: receiverId,
    replyTo: replyTo,
    replyToMsgId: replyToMsgId,
    brokerURL: natsUrl,
    payloads: payloads
  });

  // Publish message to NATS
  await publish_message(natsUrl, subject, env.toString(), correlationId);

  return env;
}

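The direct-vs-link decision above boils down to a single size comparison. As a standalone sketch (the function name `chooseTransport` is ours, not part of the bridge):

```javascript
// The routing rule smartsend applies to each serialized payload:
// under the threshold -> inline base64 over NATS ("direct"),
// at or above it -> upload and send a URL ("link").
function chooseTransport(payloadSizeBytes, sizeThreshold = 1_000_000) {
  return payloadSizeBytes < sizeThreshold ? "direct" : "link";
}

console.log(chooseTransport(512));        // e.g. a small JSON config
console.log(chooseTransport(50_000_000)); // e.g. a large Arrow table
```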
// Helper: Serialize data based on type
function _serialize_data(data, type) {
  if (type === 'json' || type === 'dictionary') {
    // JSON-serializable data (objects / dictionaries) - encode as UTF-8 JSON
    const jsonStr = JSON.stringify(data);
    return Buffer.from(jsonStr, 'utf8');
  } else if (type === 'table') {
    // Table data - convert to an Arrow IPC stream
    return Arrow.tableToIPC(data, 'stream');
  } else if (type === 'binary') {
    // Binary data - pass through as bytes
    if (data instanceof Buffer || data instanceof Uint8Array) {
      return data;
    } else if (Array.isArray(data)) {
      return Buffer.from(data);
    } else {
      throw new Error('Binary data must be a Buffer, Uint8Array, or Array');
    }
  } else {
    throw new Error(`Unknown type: ${type}`);
  }
}

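The direct path depends on `arrayBufferToBase64` / `base64ToArrayBuffer`, which are not shown in this diff. A plausible Node.js sketch of those helpers (an assumption; the real implementations may differ, e.g. for browser use):

```javascript
// Assumed helpers: encode bytes to a base64 string and back (Node.js Buffer variant).
function arrayBufferToBase64(bytes) {
  return Buffer.from(bytes).toString('base64');
}

function base64ToArrayBuffer(b64) {
  return new Uint8Array(Buffer.from(b64, 'base64'));
}

// Round-trip check: bytes -> base64 -> bytes.
const original = new Uint8Array([1, 2, 3, 255]);
const roundTrip = base64ToArrayBuffer(arrayBufferToBase64(original));
console.log(roundTrip.join(','));
```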
// Helper: Publish message to NATS
async function publish_message(natsUrl, subject, message, correlation_id) {
  /**
   * Publish a message to a NATS subject with proper connection management
   *
   * @param {string} natsUrl - NATS server URL
   * @param {string} subject - NATS subject to publish to
   * @param {string} message - JSON message to publish
   * @param {string} correlation_id - Correlation ID for logging
   */
  const { connect } = require('nats');

  log_trace(correlation_id, `Publishing message to ${subject}`);

  try {
    const nc = await connect({ servers: [natsUrl] });
    nc.publish(subject, message);
    await nc.flush();
    log_trace(correlation_id, `Message published to ${subject}`);
    await nc.close();
  } catch (error) {
    log_trace(correlation_id, `Failed to publish message: ${error.message}`);
    throw error;
  }
}

// SmartReceive function - handles both direct and link transport
async function smartreceive(msg, options = {}) {
  /**
   * Receive and process messages from NATS
   *
   * This function processes incoming NATS messages, handling both direct transport
   * (Base64-decoded payloads) and link transport (URL-based payloads).
   *
   * @param {Object} msg - NATS message object with payload property
   * @param {Object} options - Additional options
   * @param {Function} options.fileserverDownloadHandler - Function that downloads data from file server URLs
   * @param {number} options.maxRetries - Maximum retry attempts for fetching a URL (default: 5)
   * @param {number} options.baseDelay - Initial delay for exponential backoff in ms (default: 100)
   * @param {number} options.maxDelay - Maximum delay for exponential backoff in ms (default: 5000)
   *
   * @returns {Promise<Array>} - List of {dataname, data, type} objects
   */
  const {
    fileserverUrl = DEFAULT_FILESERVER_URL,
    fileserverDownloadHandler = _fetch_with_backoff,
    maxRetries = 5,
    baseDelay = 100,
    maxDelay = 5000
  } = options;

  // Parse the JSON envelope
  const jsonStr = typeof msg.payload === 'string' ? msg.payload : new TextDecoder().decode(msg.payload);
  const json_data = JSON.parse(jsonStr);

  log_trace(json_data.correlationId, `Processing received message`);

  // Process all payloads in the envelope
  const payloads_list = [];

  // Get number of payloads
  const num_payloads = json_data.payloads ? json_data.payloads.length : 0;

  for (let i = 0; i < num_payloads; i++) {
    const payload = json_data.payloads[i];
    const transport = payload.transport;
    const dataname = payload.dataname;

    if (transport === "direct") {
      // Direct transport - payload is in the message
      log_trace(json_data.correlationId, `Direct transport - decoding payload '${dataname}'`);

      // Extract the Base64 payload
      const payload_b64 = payload.data;

      // Decode Base64 payload
      const payload_bytes = base64ToArrayBuffer(payload_b64);

      // Deserialize based on type
      const data_type = payload.type;
      const data = _deserialize_data(payload_bytes, data_type, json_data.correlationId);

      payloads_list.push({ dataname, data, type: data_type });
    } else if (transport === "link") {
      // Link transport - payload is at URL
      const url = payload.data;
      log_trace(json_data.correlationId, `Link transport - fetching '${dataname}' from URL: ${url}`);

      // Fetch with exponential backoff using the download handler
      const downloaded_data = await fileserverDownloadHandler(
        url, maxRetries, baseDelay, maxDelay, json_data.correlationId
      );

      // Deserialize based on type
      const data_type = payload.type;
      const data = _deserialize_data(downloaded_data, data_type, json_data.correlationId);

      payloads_list.push({ dataname, data, type: data_type });
    } else {
      throw new Error(`Unknown transport type for payload '${dataname}': ${transport}`);
    }
  }

  return payloads_list;
}

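smartreceive returns a plain list of `{dataname, data, type}` records. A minimal consumer sketch (the sample records are fabricated for illustration, standing in for a real smartreceive result):

```javascript
// Fabricated stand-in for a smartreceive(...) result.
const result = [
  { dataname: "config", data: { step_size: 0.01 }, type: "dictionary" },
  { dataname: "blob", data: new Uint8Array([1, 2, 3]), type: "binary" }
];

// Dispatch on the declared type of each payload.
for (const { dataname, data, type } of result) {
  if (type === "dictionary") {
    console.log(`${dataname}: ${Object.keys(data).length} keys`);
  } else if (type === "binary") {
    console.log(`${dataname}: ${data.length} bytes`);
  }
}
```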
// Helper: Fetch with exponential backoff
async function _fetch_with_backoff(url, maxRetries, baseDelay, maxDelay, correlation_id) {
  let delay = baseDelay;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      if (response.ok) {
        const buffer = await response.arrayBuffer();
        log_trace(correlation_id, `Successfully fetched data from ${url} on attempt ${attempt}`);
        return new Uint8Array(buffer);
      } else {
        throw new Error(`Failed to fetch: ${response.status}`);
      }
    } catch (error) {
      log_trace(correlation_id, `Attempt ${attempt} failed: ${error.message}`);

      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, delay));
        delay = Math.min(delay * 2, maxDelay);
      }
    }
  }

  throw new Error(`Failed to fetch data after ${maxRetries} attempts`);
}

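For reference, the doubling-with-cap schedule `_fetch_with_backoff` walks through can be tabulated with a small sketch (`backoffDelays` is our name, not exported by the bridge):

```javascript
// List the sleep durations between attempts: delay doubles each retry,
// capped at maxDelay; there is no sleep after the final attempt.
function backoffDelays(maxRetries, baseDelay, maxDelay) {
  const delays = [];
  let delay = baseDelay;
  for (let attempt = 1; attempt < maxRetries; attempt++) {
    delays.push(delay);
    delay = Math.min(delay * 2, maxDelay);
  }
  return delays;
}

console.log(backoffDelays(5, 100, 5000).join(','));
```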
// Helper: Deserialize data based on type
function _deserialize_data(data, type, correlation_id) {
  if (type === 'json' || type === 'dictionary') {
    const jsonStr = new TextDecoder().decode(data);
    return JSON.parse(jsonStr);
  } else if (type === 'table') {
    // Deserialize Arrow IPC stream to Table
    return Arrow.tableFromIPC(data);
  } else if (type === 'binary') {
    // Return raw binary data
    return data;
  } else {
    throw new Error(`Unknown type: ${type}`);
  }
}

// Export for Node.js
if (typeof module !== 'undefined' && module.exports) {
  module.exports = {
    MessageEnvelope,
    MessagePayload,
    smartsend,
    smartreceive,
    _serialize_data,
    _deserialize_data,
    _fetch_with_backoff,
    _upload_to_fileserver,
    DEFAULT_SIZE_THRESHOLD,
    DEFAULT_NATS_URL,
    DEFAULT_FILESERVER_URL,
    uuid4,
    log_trace
  };
}

// Export for browser
if (typeof window !== 'undefined') {
  window.NATSBridge = {
    MessageEnvelope,
    MessagePayload,
    smartsend,
    smartreceive,
    _serialize_data,
    _deserialize_data,
    _fetch_with_backoff,
    _upload_to_fileserver,
    DEFAULT_SIZE_THRESHOLD,
    DEFAULT_NATS_URL,
    DEFAULT_FILESERVER_URL,
    uuid4,
    log_trace
  };
}
@@ -1,67 +0,0 @@
#!/usr/bin/env julia
# Scenario 1: Command & Control (Small JSON)
# Tests small JSON payloads (< 1MB) sent directly via NATS

using NATS
using JSON3
using UUIDs
using Dates

# Include the bridge module
include("../src/julia_bridge.jl")
using .BiDirectionalBridge

# Configuration
const CONTROL_SUBJECT = "control"
const RESPONSE_SUBJECT = "control_response"
const NATS_URL = "nats://localhost:4222"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# Receiver: Listen for control commands
function start_control_listener()
    conn = NATS.Connection(NATS_URL)
    try
        NATS.subscribe(conn, CONTROL_SUBJECT) do msg
            log_trace("Received message: $(msg.data)")

            # Parse the envelope
            env = MessageEnvelope(String(msg.data))

            # Parse JSON payload
            config = JSON3.read(env.payload)

            # Execute simulation with parameters
            step_size = config.step_size
            iterations = config.iterations

            # Simulate processing
            sleep(0.1) # Simulate some work

            # Send acknowledgment
            response = Dict(
                "status" => "Running",
                "correlation_id" => env.correlation_id,
                "step_size" => step_size,
                "iterations" => iterations
            )

            NATS.publish(conn, RESPONSE_SUBJECT, JSON3.write(response))
            log_trace("Sent response: $(JSON3.write(response))")
        end

        # Keep listening for 5 seconds
        sleep(5)
    finally
        NATS.close(conn)
    end
end

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Run the listener
start_control_listener()
@@ -1,34 +0,0 @@
#!/usr/bin/env node
// Scenario 1: Command & Control (Small JSON)
// Tests small JSON payloads (< 1MB) sent directly via NATS

const { SmartSend } = require('../js_bridge');

// Configuration
const CONTROL_SUBJECT = "control";
const NATS_URL = "nats://localhost:4222";

// Create correlation ID for tracing
const correlationId = require('uuid').v4();

// Sender: Send control command to Julia
async function sendControlCommand() {
  const config = {
    step_size: 0.01,
    iterations: 1000
  };

  // Send via SmartSend with type="json"
  const env = await SmartSend(
    CONTROL_SUBJECT,
    config,
    "json",
    { correlationId }
  );

  console.log(`Sent control command with correlation_id: ${correlationId}`);
  console.log(`Envelope: ${JSON.stringify(env, null, 2)}`);
}

// Run the sender
sendControlCommand().catch(console.error);
@@ -1,66 +0,0 @@
#!/usr/bin/env julia
# Scenario 2: Deep Dive Analysis (Large Arrow Table)
# Tests large Arrow tables (> 1MB) sent via HTTP fileserver

using NATS
using Arrow
using DataFrames
using JSON3
using UUIDs
using Dates

# Include the bridge module
include("../src/julia_bridge.jl")
using .BiDirectionalBridge

# Configuration
const ANALYSIS_SUBJECT = "analysis_results"
const RESPONSE_SUBJECT = "analysis_response"
const NATS_URL = "nats://localhost:4222"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# Receiver: Listen for analysis results
function start_analysis_listener()
    conn = NATS.Connection(NATS_URL)
    try
        NATS.subscribe(conn, ANALYSIS_SUBJECT) do msg
            log_trace("Received message from $(msg.subject)")

            # Parse the envelope
            env = MessageEnvelope(String(msg.data))

            # Use SmartReceive to handle the data
            result = SmartReceive(msg)

            # Process the data based on type
            if result.envelope.type == "table"
                df = result.data
                log_trace("Received DataFrame with $(nrow(df)) rows")
                log_trace("DataFrame columns: $(names(df))")

                # Send acknowledgment
                response = Dict(
                    "status" => "Processed",
                    "correlation_id" => env.correlation_id,
                    "row_count" => nrow(df)
                )
                NATS.publish(conn, RESPONSE_SUBJECT, JSON3.write(response))
            end
        end

        # Keep listening for 10 seconds
        sleep(10)
    finally
        NATS.close(conn)
    end
end

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Run the listener
start_analysis_listener()
@@ -1,54 +0,0 @@
#!/usr/bin/env node
// Scenario 2: Deep Dive Analysis (Large Arrow Table)
// Tests large Arrow tables (> 1MB) sent via HTTP fileserver

const { SmartSend } = require('../js_bridge');

// Configuration
const ANALYSIS_SUBJECT = "analysis_results";
const NATS_URL = "nats://localhost:4222";

// Create correlation ID for tracing
const correlationId = require('uuid').v4();

// Sender: Send large Arrow table to Julia
async function sendLargeTable() {
  // Create a large DataFrame-like structure
  // For testing, we'll create a smaller but still large table
  const numRows = 1000000; // 1 million rows

  const data = {
    id: Array.from({ length: numRows }, (_, i) => i + 1),
    value: Array.from({ length: numRows }, () => Math.random()),
    category: Array.from({ length: numRows }, () => ['A', 'B', 'C'][Math.floor(Math.random() * 3)])
  };

  // Convert to Arrow Table
  const { Table, Vector } = require('apache-arrow');

  const idVector = Vector.from(data.id);
  const valueVector = Vector.from(data.value);
  const categoryVector = Vector.from(data.category);

  const table = Table.from({
    id: idVector,
    value: valueVector,
    category: categoryVector
  });

  // Send via SmartSend with type="table"
  const env = await SmartSend(
    ANALYSIS_SUBJECT,
    table,
    "table",
    { correlationId }
  );

  console.log(`Sent large table with ${numRows} rows`);
  console.log(`Correlation ID: ${correlationId}`);
  console.log(`Transport: ${env.transport}`);
  console.log(`URL: ${env.url || 'N/A'}`);
}

// Run the sender
sendLargeTable().catch(console.error);
test/test_js_to_js_dict_receiver.js (new file, 79 lines)
@@ -0,0 +1,79 @@
#!/usr/bin/env node
// Test script for Dictionary transport testing
// Tests receiving 1 large and 1 small Dictionary via direct and link transport
// Uses NATSBridge.js smartreceive with "dictionary" type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_dict_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with timestamp
function log_trace(message) {
  const timestamp = new Date().toISOString();
  console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify Dictionary handling
async function test_dict_receive() {
  // Connect to NATS
  const { connect } = require('nats');
  const nc = await connect({ servers: [NATS_URL] });

  // Subscribe to the subject
  const sub = nc.subscribe(SUBJECT);

  // Stop listening after 2 minutes
  setTimeout(() => {
    nc.close();
    process.exit(0);
  }, 120000);

  for await (const msg of sub) {
    log_trace(`Received message on ${msg.subject}`);

    // Use NATSBridge.smartreceive to handle the data
    const result = await smartreceive(
      msg,
      {
        maxRetries: 5,
        baseDelay: 100,
        maxDelay: 5000
      }
    );

    // Result is a list of {dataname, data, type} objects
    for (const { dataname, data, type } of result) {
      if (typeof data === 'object' && data !== null && !Array.isArray(data)) {
        log_trace(`Received Dictionary '${dataname}' of type ${type}`);

        // Display dictionary contents
        console.log("  Contents:");
        for (const [key, value] of Object.entries(data)) {
          console.log(`    ${key} => ${value}`);
        }

        // Save to JSON file
        const fs = require('fs');
        const output_path = `./received_${dataname}.json`;
        const json_str = JSON.stringify(data, null, 2);
        fs.writeFileSync(output_path, json_str);
        log_trace(`Saved Dictionary to ${output_path}`);
      } else {
        log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
      }
    }
  }
}

// Run the test
console.log("Starting Dictionary transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_to_js_dict_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive");
test_dict_receive();
test/test_js_to_js_dict_sender.js (new file, 164 lines)
@@ -0,0 +1,164 @@
#!/usr/bin/env node
// Test script for Dictionary transport testing
// Tests sending 1 large and 1 small Dictionary via direct and link transport
// Uses NATSBridge.js smartsend with "dictionary" type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_dict_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: Log with correlation ID
function log_trace(message) {
  const timestamp = new Date().toISOString();
  console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for the plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
  // Get upload ID
  const url_getUploadID = `${fileserver_url}/upload`;
  const headers = {
    "Content-Type": "application/json"
  };
  const body = JSON.stringify({ OneShot: true });

  let response = await fetch(url_getUploadID, {
    method: "POST",
    headers: headers,
    body: body
  });

  if (!response.ok) {
    throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
  }

  const responseJson = await response.json();
  const uploadid = responseJson.id;
  const uploadtoken = responseJson.uploadToken;

  // Upload file
  const formData = new FormData();
  const blob = new Blob([data], { type: "application/octet-stream" });
  formData.append("file", blob, dataname);

  response = await fetch(`${fileserver_url}/file/${uploadid}`, {
    method: "POST",
    headers: {
      "X-UploadToken": uploadtoken
    },
    body: formData
  });

  if (!response.ok) {
    throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
  }

  const fileResponseJson = await response.json();
  const fileid = fileResponseJson.id;

  const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

  return {
    status: response.status,
    uploadid: uploadid,
    fileid: fileid,
    url: url
  };
}

// Sender: Send Dictionaries via smartsend
async function test_dict_send() {
  // Create a small Dictionary (will use direct transport)
  const small_dict = {
    name: "Alice",
    age: 30,
    scores: [95, 88, 92],
    metadata: {
      height: 155,
      weight: 55
    }
  };

  // Create a large Dictionary (will use link transport if > 1MB)
  const large_dict_ids = [];
  const large_dict_names = [];
  const large_dict_scores = [];
  const large_dict_categories = [];

  for (let i = 0; i < 50000; i++) {
    large_dict_ids.push(i + 1);
    large_dict_names.push(`User_${i}`);
    large_dict_scores.push(Math.floor(Math.random() * 100) + 1);
    large_dict_categories.push(`Category_${Math.floor(Math.random() * 10) + 1}`);
  }

  const large_dict = {
    ids: large_dict_ids,
    names: large_dict_names,
    scores: large_dict_scores,
    categories: large_dict_categories,
    metadata: {
      source: "test_generator",
      timestamp: new Date().toISOString()
    }
  };

  // Test data 1: small Dictionary
  const data1 = { dataname: "small_dict", data: small_dict, type: "dictionary" };

  // Test data 2: large Dictionary
  const data2 = { dataname: "large_dict", data: large_dict, type: "dictionary" };

  // Use smartsend with dictionary type
  // For the small Dictionary: direct transport (JSON encoded)
  // For the large Dictionary: link transport (uploaded to the fileserver)
  const env = await smartsend(
    SUBJECT,
    [data1, data2],
    {
      natsUrl: NATS_URL,
      fileserverUrl: FILESERVER_URL,
      fileserverUploadHandler: plik_upload_handler,
      sizeThreshold: 1_000_000,
      correlationId: correlation_id,
      msgPurpose: "chat",
      senderName: "dict_sender",
      receiverName: "",
      receiverId: "",
      replyTo: "",
      replyToMsgId: ""
    }
  );

  log_trace(`Sent message with ${env.payloads.length} payloads`);

  // Log transport type for each payload
  for (let i = 0; i < env.payloads.length; i++) {
    const payload = env.payloads[i];
    log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
    log_trace(`  Transport: ${payload.transport}`);
    log_trace(`  Type: ${payload.type}`);
    log_trace(`  Size: ${payload.size} bytes`);
    log_trace(`  Encoding: ${payload.encoding}`);

    if (payload.transport === "link") {
      log_trace(`  URL: ${payload.data}`);
    }
  }
}

// Run the test
console.log("Starting Dictionary transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend for dictionaries");
test_dict_send();
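As a sanity check on the test above, the small dictionary's JSON encoding sits far below the 1 MB threshold, so it should always take the direct path (assuming the bridge JSON-encodes `dictionary` payloads the way its `json` type is handled):

```javascript
// Re-create the small dictionary from the sender test and measure its
// serialized size the way the bridge's JSON path would (an assumption).
const small_dict = {
  name: "Alice",
  age: 30,
  scores: [95, 88, 92],
  metadata: { height: 155, weight: 55 }
};

const bytes = Buffer.from(JSON.stringify(small_dict), 'utf8');
console.log(bytes.byteLength < 1_000_000 ? "direct" : "link");
```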
test/test_js_to_js_file_receiver.js (new file, 70 lines)
@@ -0,0 +1,70 @@
#!/usr/bin/env node
// Test script for large payload testing using binary transport
// Tests receiving a large file (> 1MB) via smartreceive with binary type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_test";
const NATS_URL = "nats.yiem.cc";

// Helper: timestamped logging
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: listen for messages and verify large payload handling
async function test_large_binary_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Stop listening after 2 minutes (scheduled before the blocking loop below)
    setTimeout(() => {
        console.log("Test completed.");
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        // Result is a list of {dataname, data, type} objects
        for (const { dataname, data, type } of result) {
            if (data instanceof Uint8Array || Array.isArray(data)) {
                const file_size = data.length;
                log_trace(`Received ${file_size} bytes of binary data for '${dataname}' of type ${type}`);

                // Save received data to a test file
                const fs = require('fs');
                const output_path = `./new_${dataname}`;
                fs.writeFileSync(output_path, Buffer.from(data));
                log_trace(`Saved received data to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting large binary payload test...");

// Run receiver
console.log("testing smartreceive");
test_large_binary_receive().catch(err => console.error("Test failed:", err));
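To check that the receiver's saved file really matches what the sender read from disk, a byte-for-byte comparison helper is enough. This is a hypothetical helper for manual verification, not part of the NATSBridge API:

```javascript
// Hypothetical verification helper (not part of NATSBridge):
// returns true when two byte sequences are identical, byte for byte.
function bytes_equal(a, b) {
    if (a.length !== b.length) return false;
    for (let i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) return false;
    }
    return true;
}
```

After a round trip you could compare `fs.readFileSync('./testFile_large.zip')` on the sender side with `fs.readFileSync('./new_testFile_large.zip')` on the receiver side.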
143
test/test_js_to_js_file_sender.js
Normal file
@@ -0,0 +1,143 @@
#!/usr/bin/env node
// Test script for large payload testing using binary transport
// Tests sending a large file (> 1MB) via smartsend with binary type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    log_trace(`Uploading ${dataname} to fileserver: ${fileserver_url}`);

    // Step 1: Get upload ID and token
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Step 2: Upload file data
    const url_upload = `${fileserver_url}/file/${uploadid}`;

    // Create multipart form data
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(url_upload, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    // Build the download URL
    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    log_trace(`Uploaded to URL: ${url}`);

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Sender: send large binary file via smartsend
async function test_large_binary_send() {
    // Read the large file as binary data
    const fs = require('fs');

    // Test data 1
    const file_path1 = './testFile_large.zip';
    const file_data1 = fs.readFileSync(file_path1);
    const filename1 = 'testFile_large.zip';
    const data1 = { dataname: filename1, data: file_data1, type: "binary" };

    // Test data 2
    const file_path2 = './testFile_small.zip';
    const file_data2 = fs.readFileSync(file_path2);
    const filename2 = 'testFile_small.zip';
    const data2 = { dataname: filename2, data: file_data2, type: "binary" };

    // Use smartsend with binary type - it automatically switches to link transport
    // when the file size exceeds the threshold (1MB by default)
    const env = await smartsend(
        SUBJECT,
        [data1, data2],
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: ""
        }
    );

    log_trace(`Sent message with transport: ${env.payloads[0].transport}`);
    log_trace(`Envelope type: ${env.payloads[0].type}`);

    // Check if link transport was used
    if (env.payloads[0].transport === "link") {
        log_trace("Using link transport - file uploaded to HTTP server");
        log_trace(`URL: ${env.payloads[0].data}`);
    } else {
        log_trace("Using direct transport - payload sent via NATS");
    }
}

// Run the test
console.log("Starting large binary payload test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run the sender; the matching receiver lives in test_js_to_js_file_receiver.js
console.log("start smartsend");
test_large_binary_send()
    .then(() => console.log("Test completed."))
    .catch(err => console.error("Test failed:", err));
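The download URL built at the end of `plik_upload_handler` is a pure string operation, so it can be factored out and checked in isolation. A minimal sketch (the helper name is mine, not part of plik or NATSBridge), using the same expression as the handler above:

```javascript
// Build the plik download URL exactly as plik_upload_handler does:
// <fileserver>/file/<uploadid>/<fileid>/<percent-encoded filename>
function build_plik_download_url(fileserver_url, uploadid, fileid, dataname) {
    return `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
}
```

`encodeURIComponent` matters here: a filename containing spaces or non-ASCII characters would otherwise produce a URL the receiver cannot fetch.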
276
test/test_js_to_js_mix_payload_sender.js
Normal file
@@ -0,0 +1,276 @@
#!/usr/bin/env node
// Test script for mixed-content message testing
// Tests sending a mix of text, json, table, image, audio, video, and binary data
// from JavaScript serviceA to JavaScript serviceB using NATSBridge.js smartsend
//
// This test demonstrates that any combination and any number of mixed content
// payloads can be sent and received correctly.

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_mix_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    log_trace(`Uploading ${dataname} to fileserver: ${fileserver_url}`);

    // Step 1: Get upload ID and token
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Step 2: Upload file data
    const url_upload = `${fileserver_url}/file/${uploadid}`;

    // Create multipart form data
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(url_upload, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    // Build the download URL
    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    log_trace(`Uploaded to URL: ${url}`);

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Helper: create sample data for each type
function create_sample_data() {
    // Text data (small - direct transport)
    const text_data = "Hello! This is a test chat message. 🎉\nHow are you doing today? 😊";

    // Dictionary/JSON data (medium - could be direct or link)
    const dict_data = {
        type: "chat",
        sender: "serviceA",
        receiver: "serviceB",
        metadata: {
            timestamp: new Date().toISOString(),
            priority: "high",
            tags: ["urgent", "chat", "test"]
        },
        content: {
            text: "This is a JSON-formatted chat message with nested structure.",
            format: "markdown",
            mentions: ["user1", "user2"]
        }
    };

    // Table data (small - direct transport) - NOT IMPLEMENTED (requires apache-arrow)
    // const table_data_small = {...};

    // Table data (large - link transport) - NOT IMPLEMENTED (requires apache-arrow)
    // const table_data_large = {...};

    // Image data (small binary - direct transport)
    // A simple 10x10 pixel PNG-like buffer: 8-byte PNG signature + 10*10*3 bytes of RGB data
    const image_width = 10;
    const image_height = 10;
    const image_data = new Uint8Array(image_width * image_height * 3 + 8);
    // PNG signature
    image_data[0] = 0x89;
    image_data[1] = 0x50;
    image_data[2] = 0x4E;
    image_data[3] = 0x47;
    image_data[4] = 0x0D;
    image_data[5] = 0x0A;
    image_data[6] = 0x1A;
    image_data[7] = 0x0A;
    // Simple RGB data (10*10*3 = 300 bytes)
    for (let i = 0; i < image_width * image_height * 3; i++) {
        image_data[i + 8] = 0xFF; // red pixel value
    }

    // Image data (large - link transport)
    const large_image_width = 500;
    const large_image_height = 1000;
    const large_image_data = new Uint8Array(large_image_width * large_image_height * 3 + 8);
    // PNG signature
    large_image_data[0] = 0x89;
    large_image_data[1] = 0x50;
    large_image_data[2] = 0x4E;
    large_image_data[3] = 0x47;
    large_image_data[4] = 0x0D;
    large_image_data[5] = 0x0A;
    large_image_data[6] = 0x1A;
    large_image_data[7] = 0x0A;
    // Random RGB data (0-255)
    for (let i = 0; i < large_image_width * large_image_height * 3; i++) {
        large_image_data[i + 8] = Math.floor(Math.random() * 256);
    }

    // Audio data (small binary - direct transport)
    const audio_data = new Uint8Array(100);
    for (let i = 0; i < audio_data.length; i++) {
        audio_data[i] = Math.floor(Math.random() * 256);
    }

    // Audio data (large - link transport)
    const large_audio_data = new Uint8Array(1_500_000);
    for (let i = 0; i < large_audio_data.length; i++) {
        large_audio_data[i] = Math.floor(Math.random() * 256);
    }

    // Video data (small binary - direct transport)
    const video_data = new Uint8Array(150);
    for (let i = 0; i < video_data.length; i++) {
        video_data[i] = Math.floor(Math.random() * 256);
    }

    // Video data (large - link transport)
    const large_video_data = new Uint8Array(1_500_000);
    for (let i = 0; i < large_video_data.length; i++) {
        large_video_data[i] = Math.floor(Math.random() * 256);
    }

    // Binary data (small - direct transport)
    const binary_data = new Uint8Array(200);
    for (let i = 0; i < binary_data.length; i++) {
        binary_data[i] = Math.floor(Math.random() * 256);
    }

    // Binary data (large - link transport)
    const large_binary_data = new Uint8Array(1_500_000);
    for (let i = 0; i < large_binary_data.length; i++) {
        large_binary_data[i] = Math.floor(Math.random() * 256);
    }

    return {
        text_data,
        dict_data,
        // table_data_small,
        // table_data_large,
        image_data,
        large_image_data,
        audio_data,
        large_audio_data,
        video_data,
        large_video_data,
        binary_data,
        large_binary_data
    };
}

// Sender: send mixed content via smartsend
async function test_mix_send() {
    // Create sample data
    const { text_data, dict_data, image_data, large_image_data, audio_data, large_audio_data, video_data, large_video_data, binary_data, large_binary_data } = create_sample_data();

    // Create payloads list - mixed content with both small and large data
    // Small data uses direct transport, large data uses link transport
    const payloads = [
        // Small data (direct transport) - text, dictionary
        { dataname: "chat_text", data: text_data, type: "text" },
        { dataname: "chat_json", data: dict_data, type: "dictionary" },
        // { dataname: "chat_table_small", data: table_data_small, type: "table" },

        // Large data (link transport) - large image, large audio, large video, large binary
        // { dataname: "chat_table_large", data: table_data_large, type: "table" },
        { dataname: "user_image_large", data: large_image_data, type: "image" },
        { dataname: "audio_clip_large", data: large_audio_data, type: "audio" },
        { dataname: "video_clip_large", data: large_video_data, type: "video" },
        { dataname: "binary_file_large", data: large_binary_data, type: "binary" }
    ];

    // Use smartsend with mixed content
    const env = await smartsend(
        SUBJECT,
        payloads,
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "mix_sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: ""
        }
    );

    log_trace(`Sent message with ${env.payloads.length} payloads`);

    // Log transport type for each payload
    for (let i = 0; i < env.payloads.length; i++) {
        const payload = env.payloads[i];
        log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
        log_trace(`  Transport: ${payload.transport}`);
        log_trace(`  Type: ${payload.type}`);
        log_trace(`  Size: ${payload.size} bytes`);
        log_trace(`  Encoding: ${payload.encoding}`);

        if (payload.transport === "link") {
            log_trace(`  URL: ${payload.data}`);
        }
    }

    // Summary
    console.log("\n--- Transport Summary ---");
    const direct_count = env.payloads.filter(p => p.transport === "direct").length;
    const link_count = env.payloads.filter(p => p.transport === "link").length;
    log_trace(`Direct transport: ${direct_count} payloads`);
    log_trace(`Link transport: ${link_count} payloads`);
}

// Run the test
console.log("Starting mixed-content transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend for mixed content");
test_mix_send()
    .then(() => {
        console.log("\nTest completed.");
        console.log("Note: Run test_js_to_js_mix_payloads_receiver.js to receive the messages.");
    })
    .catch(err => console.error("Test failed:", err));
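The direct-vs-link split that this test exercises can be sketched as a pure function: measure each payload's byte size and compare it to `sizeThreshold`. This is an illustrative model of the routing decision, not NATSBridge's actual implementation (which measures the serialized Arrow/JSON form):

```javascript
// Illustrative sketch of smartsend's transport routing: payloads whose byte
// size exceeds sizeThreshold go via "link" (fileserver upload), the rest
// travel "direct" inside the NATS message.
function choose_transport(payloads, sizeThreshold) {
    return payloads.map(({ dataname, data }) => {
        const size = data instanceof Uint8Array
            ? data.length
            : Buffer.byteLength(typeof data === "string" ? data : JSON.stringify(data), "utf8");
        return { dataname, size, transport: size > sizeThreshold ? "link" : "direct" };
    });
}
```

With the sample data above, the 1.5MB buffers land on the "link" side of the 1MB threshold while the chat text and dictionary stay "direct", matching the transport summary the sender prints.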
172
test/test_js_to_js_mix_payloads_receiver.js
Normal file
@@ -0,0 +1,172 @@
#!/usr/bin/env node
// Test script for mixed-content message testing
// Tests receiving a mix of text, json, table, image, audio, video, and binary data
// from JavaScript serviceA to JavaScript serviceB using NATSBridge.js smartreceive
//
// This test demonstrates that any combination and any number of mixed content
// payloads can be sent and received correctly.

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_mix_test";
const NATS_URL = "nats.yiem.cc";

// Helper: timestamped logging
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: listen for messages and verify mixed content handling
async function test_mix_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Stop listening after 2 minutes (scheduled before the blocking loop below)
    setTimeout(() => {
        console.log("\nTest completed.");
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        log_trace(`Received ${result.length} payloads`);

        // Result is a list of {dataname, data, type} objects
        for (const { dataname, data, type } of result) {
            log_trace(`\n=== Payload: ${dataname} (type: ${type}) ===`);

            // Handle different data types
            if (type === "text") {
                // Text data - should be a String
                if (typeof data === 'string') {
                    log_trace(`  Type: String`);
                    log_trace(`  Length: ${data.length} characters`);

                    // Display first 200 characters
                    if (data.length > 200) {
                        log_trace(`  First 200 chars: ${data.substring(0, 200)}...`);
                    } else {
                        log_trace(`  Content: ${data}`);
                    }

                    // Save to file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.txt`;
                    fs.writeFileSync(output_path, data);
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected String, got ${typeof data}`);
                }

            } else if (type === "dictionary") {
                // Dictionary data - should be an object
                if (typeof data === 'object' && data !== null && !Array.isArray(data)) {
                    log_trace(`  Type: Object`);
                    log_trace(`  Keys: ${Object.keys(data).join(', ')}`);

                    // Display nested content
                    for (const [key, value] of Object.entries(data)) {
                        log_trace(`  ${key} => ${value}`);
                    }

                    // Save to JSON file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.json`;
                    const json_str = JSON.stringify(data, null, 2);
                    fs.writeFileSync(output_path, json_str);
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected Object, got ${typeof data}`);
                }

            } else if (type === "table") {
                // Table data - should be an array of objects (requires apache-arrow)
                log_trace(`  Type: Array (requires apache-arrow for full deserialization)`);
                if (Array.isArray(data)) {
                    log_trace(`  Length: ${data.length} items`);
                    log_trace(`  First item: ${JSON.stringify(data[0])}`);
                } else {
                    log_trace(`  ERROR: Expected Array, got ${typeof data}`);
                }

            } else if (type === "image" || type === "audio" || type === "video" || type === "binary") {
                // Binary data - should be Uint8Array
                if (data instanceof Uint8Array || Array.isArray(data)) {
                    log_trace(`  Type: Uint8Array (binary)`);
                    log_trace(`  Size: ${data.length} bytes`);

                    // Save to file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.bin`;
                    fs.writeFileSync(output_path, Buffer.from(data));
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected Uint8Array, got ${typeof data}`);
                }

            } else {
                log_trace(`  ERROR: Unknown data type '${type}'`);
            }
        }

        // Summary
        console.log("\n=== Verification Summary ===");
        const text_count = result.filter(x => x.type === "text").length;
        const dict_count = result.filter(x => x.type === "dictionary").length;
        const table_count = result.filter(x => x.type === "table").length;
        const image_count = result.filter(x => x.type === "image").length;
        const audio_count = result.filter(x => x.type === "audio").length;
        const video_count = result.filter(x => x.type === "video").length;
        const binary_count = result.filter(x => x.type === "binary").length;

        log_trace(`Text payloads: ${text_count}`);
        log_trace(`Dictionary payloads: ${dict_count}`);
        log_trace(`Table payloads: ${table_count}`);
        log_trace(`Image payloads: ${image_count}`);
        log_trace(`Audio payloads: ${audio_count}`);
        log_trace(`Video payloads: ${video_count}`);
        log_trace(`Binary payloads: ${binary_count}`);

        // Print size info for each payload
        console.log("\n=== Payload Details ===");
        for (const { dataname, data, type } of result) {
            if (["image", "audio", "video", "binary"].includes(type)) {
                log_trace(`${dataname}: ${data.length} bytes (binary)`);
            } else if (type === "table") {
                log_trace(`${dataname}: ${data.length} items (Array)`);
            } else if (type === "dictionary") {
                log_trace(`${dataname}: ${JSON.stringify(data).length} bytes (Object)`);
            } else if (type === "text") {
                log_trace(`${dataname}: ${data.length} characters (String)`);
            }
        }
    }
}

// Run the test
console.log("Starting mixed-content transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_to_js_mix_payload_sender.js first to send test data.");

// Run receiver
console.log("\ntesting smartreceive for mixed content");
test_mix_receive().catch(err => console.error("Test failed:", err));
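The `maxRetries`/`baseDelay`/`maxDelay` options passed to `smartreceive` suggest a capped exponential backoff for fetching link-transport payloads. As an assumption about that schedule (the library's exact policy may differ), the delay before retry attempt `n` can be sketched as:

```javascript
// Assumed retry schedule: exponential backoff doubling from baseDelay,
// capped at maxDelay. attempt is 0-based. All values in milliseconds.
function retry_delay(attempt, baseDelay, maxDelay) {
    return Math.min(maxDelay, baseDelay * 2 ** attempt);
}
```

With the options used in the receivers above (`baseDelay: 100`, `maxDelay: 5000`, `maxRetries: 5`), the delays would run 100, 200, 400, 800, 1600 ms, never reaching the 5000 ms cap.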
86
test/test_js_to_js_table_receiver.js
Normal file
@@ -0,0 +1,86 @@
#!/usr/bin/env node
// Test script for Table transport testing
// Tests receiving one large and one small Table via direct and link transport
// Uses NATSBridge.js smartreceive with "table" type
//
// Note: This test requires the apache-arrow library to deserialize table data.
// The JavaScript implementation uses apache-arrow for Arrow IPC deserialization.

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_table_test";
const NATS_URL = "nats.yiem.cc";

// Helper: timestamped logging
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: listen for messages and verify Table handling
async function test_table_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Stop listening after 2 minutes (scheduled before the blocking loop below)
    setTimeout(() => {
        console.log("Test completed.");
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        // Result is a list of {dataname, data, type} objects
        for (const { dataname, data, type } of result) {
            if (Array.isArray(data)) {
                log_trace(`Received Table '${dataname}' of type ${type}`);

                // Display table contents
                console.log(`  Dimensions: ${data.length} rows x ${data.length > 0 ? Object.keys(data[0]).length : 0} columns`);
                console.log(`  Columns: ${data.length > 0 ? Object.keys(data[0]).join(', ') : ''}`);

                // Display first few rows
                console.log(`  First 5 rows:`);
                for (let i = 0; i < Math.min(5, data.length); i++) {
                    console.log(`    Row ${i}: ${JSON.stringify(data[i])}`);
                }

                // Save to JSON file
                const fs = require('fs');
                const output_path = `./received_${dataname}.json`;
                const json_str = JSON.stringify(data, null, 2);
                fs.writeFileSync(output_path, json_str);
                log_trace(`Saved Table to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting Table transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_to_js_table_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive");
test_table_receive().catch(err => console.error("Test failed:", err));
164
test/test_js_to_js_table_sender.js
Normal file
@@ -0,0 +1,164 @@
#!/usr/bin/env node
|
||||
// Test script for Table transport testing
|
||||
// Tests sending 1 large and 1 small Tables via direct and link transport
|
||||
// Uses NATSBridge.js smartsend with "table" type
|
||||
//
|
||||
// Note: This test requires the apache-arrow library to serialize/deserialize table data.
|
||||
// The JavaScript implementation uses apache-arrow for Arrow IPC serialization.
|
||||
|
||||
const { smartsend, uuid4, log_trace } = require('./src/NATSBridge');
|
||||
|
||||
// Configuration
|
||||
const SUBJECT = "/NATSBridge_table_test";
|
||||
const NATS_URL = "nats.yiem.cc";
|
||||
const FILESERVER_URL = "http://192.168.88.104:8080";
|
||||
|
||||
// Create correlation ID for tracing
|
||||
const correlation_id = uuid4();
|
||||
|
||||
// Helper: Log with correlation ID
|
||||
function log_trace(message) {
|
||||
const timestamp = new Date().toISOString();
|
||||
console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
|
||||
}
|
||||
|
||||
// File upload handler for plik server
|
||||
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
|
||||
log_trace(correlation_id, `Uploading ${dataname} to fileserver: ${fileserver_url}`);
|
||||
|
||||
// Step 1: Get upload ID and token
|
||||
const url_getUploadID = `${fileserver_url}/upload`;
|
||||
const headers = {
|
||||
"Content-Type": "application/json"
|
||||
};
|
||||
const body = JSON.stringify({ OneShot: true });
|
||||
|
||||
let response = await fetch(url_getUploadID, {
|
||||
method: "POST",
|
||||
headers: headers,
|
||||
body: body
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
|
||||
}
|
||||
|
||||
const responseJson = await response.json();
|
||||
const uploadid = responseJson.id;
|
||||
const uploadtoken = responseJson.uploadToken;
|
||||
|
||||
// Step 2: Upload file data
|
||||
const url_upload = `${fileserver_url}/file/${uploadid}`;
|
||||
|
||||
// Create multipart form data
|
||||
const formData = new FormData();
|
||||
const blob = new Blob([data], { type: "application/octet-stream" });
|
||||
formData.append("file", blob, dataname);
|
||||
|
||||
response = await fetch(url_upload, {
|
||||
method: "POST",
|
||||
headers: {
|
||||
"X-UploadToken": uploadtoken
|
||||
},
|
||||
body: formData
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
|
||||
}
|
||||
|
||||
const fileResponseJson = await response.json();
|
||||
const fileid = fileResponseJson.id;
|
||||
|
||||
// Build the download URL
|
||||
const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
|
||||
|
    log_trace(`Uploaded to URL: ${url}`);

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Sender: Send Tables via smartsend
async function test_table_send() {
    // Note: in production this test would use the apache-arrow library to create Arrow IPC data.
    // For now, we use a simple array of row objects as the table data.

    // Create a small Table (will use direct transport)
    const small_table = [
        { id: 1, name: "Alice", score: 95 },
        { id: 2, name: "Bob", score: 88 },
        { id: 3, name: "Charlie", score: 92 }
    ];

    // Create a large Table (~2MB of rows, will use link transport above the 1MB threshold)
    const large_table = [];
    for (let i = 0; i < 50000; i++) {
        large_table.push({
            id: i,
            message: `msg_${i}`,
            sender: `sender_${i}`,
            timestamp: new Date().toISOString(),
            priority: Math.floor(Math.random() * 3) + 1
        });
    }

    // Test data 1: small Table
    const data1 = { dataname: "small_table", data: small_table, type: "table" };

    // Test data 2: large Table
    const data2 = { dataname: "large_table", data: large_table, type: "table" };

    // Use smartsend with table type:
    //   small Table -> direct transport (Arrow IPC encoded)
    //   large Table -> link transport (uploaded to fileserver)
    const env = await smartsend(
        SUBJECT,
        [data1, data2],
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "table_sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: ""
        }
    );

    log_trace(`Sent message with ${env.payloads.length} payloads`);

    // Log transport type for each payload
    for (let i = 0; i < env.payloads.length; i++) {
        const payload = env.payloads[i];
        log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
        log_trace(`  Transport: ${payload.transport}`);
        log_trace(`  Type: ${payload.type}`);
        log_trace(`  Size: ${payload.size} bytes`);
        log_trace(`  Encoding: ${payload.encoding}`);

        if (payload.transport === "link") {
            log_trace(`  URL: ${payload.data}`);
        }
    }
}

// Run the test
console.log("Starting Table transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender and report completion once the async send has finished
console.log("start smartsend for tables");
test_table_send().then(() => console.log("Test completed."));
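The comments above note that, in production, the row objects would be encoded as Arrow IPC before the size check. As a rough sketch of how the `sizeThreshold` decision could be predicted up front, here is a hypothetical helper (not part of NATSBridge) that uses the JSON byte length as a stand-in for the real Arrow IPC size, which will differ:

```javascript
// Hypothetical helper: estimate the serialized size of a rows-as-objects
// table and predict which transport smartsend would pick at the threshold.
// JSON byte length is only a proxy for the actual Arrow IPC size.
function estimateTransport(rows, sizeThreshold = 1_000_000) {
    const bytes = Buffer.byteLength(JSON.stringify(rows), "utf8");
    return { bytes, transport: bytes > sizeThreshold ? "link" : "direct" };
}

const smallTable = [
    { id: 1, name: "Alice", score: 95 },
    { id: 2, name: "Bob", score: 88 }
];
console.log(estimateTransport(smallTable).transport); // → "direct"
```

This only helps reason about which branch a given table should hit; the authoritative decision is made inside `smartsend` on the encoded bytes.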
80
test/test_js_to_js_text_receiver.js
Normal file
@@ -0,0 +1,80 @@
#!/usr/bin/env node
// Test script for text transport testing
// Tests receiving 1 large and 1 small text from JavaScript serviceA to JavaScript serviceB
// Uses NATSBridge.js smartreceive with "text" type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_text_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with a timestamp
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify text handling
async function test_text_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Stop listening after 120 seconds. The subscription loop below blocks,
    // so the shutdown timer must be armed before the loop starts.
    setTimeout(() => {
        nc.close();
        process.exit(0);
    }, 120000);

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        // Result is a list of {dataname, data, type} objects
        for (const { dataname, data, type } of result) {
            if (typeof data === 'string') {
                log_trace(`Received text '${dataname}' of type ${type}`);
                log_trace(`  Length: ${data.length} characters`);

                // Display the first 100 characters
                if (data.length > 100) {
                    log_trace(`  First 100 characters: ${data.substring(0, 100)}...`);
                } else {
                    log_trace(`  Content: ${data}`);
                }

                // Save to file
                const fs = require('fs');
                const output_path = `./received_${dataname}.txt`;
                fs.writeFileSync(output_path, data);
                log_trace(`Saved text to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting text transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_to_js_text_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive for text");
test_text_receive();
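The receiver above passes `maxRetries`, `baseDelay`, and `maxDelay` to `smartreceive`. As a hedged sketch of the assumed semantics (exponential backoff for retrying link downloads — not the actual NATSBridge code), the schedule those three options plausibly produce looks like this:

```javascript
// Hypothetical sketch: combine maxRetries / baseDelay / maxDelay into an
// exponential-backoff schedule, doubling the delay each attempt and
// capping it at maxDelay.
function backoffSchedule(maxRetries, baseDelay, maxDelay) {
    const delays = [];
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        delays.push(Math.min(baseDelay * 2 ** attempt, maxDelay));
    }
    return delays;
}

console.log(backoffSchedule(5, 100, 5000)); // → [ 100, 200, 400, 800, 1600 ]
```

With these defaults the cap never triggers; it only matters once the retry count is high enough that `baseDelay * 2^attempt` exceeds `maxDelay`.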
140
test/test_js_to_js_text_sender.js
Normal file
@@ -0,0 +1,140 @@
#!/usr/bin/env node
// Test script for text transport testing
// Tests sending 1 large and 1 small text from JavaScript serviceA to JavaScript serviceB
// Uses NATSBridge.js smartsend with "text" type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_text_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: Log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    // Get upload ID
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Upload file
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(`${fileserver_url}/file/${uploadid}`, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Sender: Send text via smartsend
async function test_text_send() {
    // Create a small text (will use direct transport)
    const small_text = "Hello, this is a small text message. Testing direct transport via NATS.";

    // Create a large text (several MB, will use link transport above the 1MB threshold)
    const large_text_lines = [];
    for (let i = 0; i < 50000; i++) {
        large_text_lines.push(`Line ${i}: This is a sample text line with some content to pad the size. `);
    }
    const large_text = large_text_lines.join("");

    // Test data 1: small text
    const data1 = { dataname: "small_text", data: small_text, type: "text" };

    // Test data 2: large text
    const data2 = { dataname: "large_text", data: large_text, type: "text" };

    // Use smartsend with text type:
    //   small text -> direct transport (Base64-encoded UTF-8)
    //   large text -> link transport (uploaded to fileserver)
    const env = await smartsend(
        SUBJECT,
        [data1, data2],
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "text_sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: ""
        }
    );

    log_trace(`Sent message with ${env.payloads.length} payloads`);

    // Log transport type for each payload
    for (let i = 0; i < env.payloads.length; i++) {
        const payload = env.payloads[i];
        log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
        log_trace(`  Transport: ${payload.transport}`);
        log_trace(`  Type: ${payload.type}`);
        log_trace(`  Size: ${payload.size} bytes`);
        log_trace(`  Encoding: ${payload.encoding}`);

        if (payload.transport === "link") {
            log_trace(`  URL: ${payload.data}`);
        }
    }
}

// Run the test
console.log("Starting text transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender and report completion once the async send has finished
console.log("start smartsend for text");
test_text_send().then(() => console.log("Test completed."));
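The comments in the sender above say small text goes out as Base64-encoded UTF-8 over the direct transport. Base64 inflates a payload by roughly 4/3, which is worth remembering when reasoning about `sizeThreshold`. An illustrative sketch (the actual encoding pipeline lives inside NATSBridge):

```javascript
// Measure the raw UTF-8 byte count of a string and the length of its
// Base64 encoding; Base64 emits 4 characters for every 3 input bytes.
function base64Size(text) {
    const raw = Buffer.byteLength(text, "utf8");
    const encoded = Buffer.from(text, "utf8").toString("base64").length;
    return { raw, encoded };
}

console.log(base64Size("Hello, NATSBridge!")); // → { raw: 18, encoded: 24 }
```

So a text payload whose raw size sits just under the threshold may still exceed it once encoded, depending on where the bridge applies the size check.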
@@ -1,190 +0,0 @@
#!/usr/bin/env julia
# Test script for large payload testing using binary transport
# Tests sending a large file (> 1MB) via smartsend with binary type

using NATS, JSON, UUIDs, Dates

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/large_binary_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# ------------------------------------------------------------------------------------------------ #
#                                        test file transfer                                        #
# ------------------------------------------------------------------------------------------------ #

# File path for large binary payload test
const FILE_PATH = "./testFile_small.zip"
const filename = basename(FILE_PATH)

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Sender: Send large binary file via smartsend
function test_large_binary_send()
    conn = NATS.connect(NATS_URL)
    # Read the large file as binary data
    log_trace("Reading large file: $FILE_PATH")
    file_data = read(FILE_PATH)

    file_size = length(file_data)
    log_trace("File size: $file_size bytes")

    # Use smartsend with binary type - will automatically use link transport
    # if file size exceeds the threshold (1MB by default)
    env = NATSBridge.smartsend(
        SUBJECT,
        file_data,
        "binary";
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        dataname = filename
    )

    log_trace("Sent message with transport: $(env.transport)")
    log_trace("Envelope type: $(env.type)")

    # Check if link transport was used
    if env.transport == "link"
        log_trace("Using link transport - file uploaded to HTTP server")
        log_trace("URL: $(env.url)")
    else
        log_trace("Using direct transport - payload sent via NATS")
    end

    NATS.drain(conn)
end

# Receiver: Listen for messages and verify large payload handling
function test_large_binary_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        result = NATSBridge.smartreceive(msg)
        # Check transport type
        if result.envelope.transport == "direct"
            log_trace("Received direct transport")
        else
            # For link transport, result.data is the URL
            log_trace("Received link transport")
        end

        # Verify the received data matches the original
        if result.envelope.type == "binary"
            if isa(result.data, Vector{UInt8})
                file_size = length(result.data)
                log_trace("Received $(file_size) bytes of binary data")

                # Save received data to a test file
                println("metadata ", result.envelope.metadata)
                dataname = result.envelope.metadata["dataname"]
                if dataname != "NA"
                    output_path = "./new_$dataname"
                    write(output_path, result.data)
                    log_trace("Saved received data to $output_path")
                end

                # Verify file size against the envelope metadata
                if file_size == result.envelope.metadata["content_length"]
                    log_trace("SUCCESS: File size matches! Original: $(result.envelope.metadata["content_length"]) bytes")
                else
                    log_trace("WARNING: File size mismatch! Original: $(result.envelope.metadata["content_length"]), Received: $file_size")
                end
            end
        end
    end

    # Keep listening for 120 seconds
    sleep(120)
    NATS.drain(conn)
end

# Run the test
println("Starting large binary payload test...")
println("Correlation ID: $correlation_id")
println("File: $FILE_PATH")

# Run sender first
println("start smartsend")
test_large_binary_send()

# # Run receiver
# println("testing smartreceive")
# test_large_binary_receive()

println("Test completed.")
82
test/test_julia_to_julia_dict_receiver.jl
Normal file
@@ -0,0 +1,82 @@
#!/usr/bin/env julia
# Test script for Dictionary transport testing
# Tests receiving 1 large and 1 small Dictionary via direct and link transport
# Uses NATSBridge.jl smartreceive with "dictionary" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# ------------------------------------------------------------------------------------------------ #
#                                     test dictionary transfer                                     #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end

# Receiver: Listen for messages and verify Dictionary handling
function test_dict_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is a list of (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result
            if isa(data, JSON.Object{String, Any})
                log_trace("Received Dictionary '$dataname' of type $data_type")

                # Display dictionary contents
                println("  Contents:")
                for (key, value) in data
                    println("    $key => $value")
                end

                # Save to JSON file
                output_path = "./received_$dataname.json"
                json_str = JSON.json(data, 2)
                write(output_path, json_str)
                log_trace("Saved Dictionary to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 120 seconds
    sleep(120)
    NATS.drain(conn)
end

# Run the test
println("Starting Dictionary transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_dict_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive")
test_dict_receive()

println("Test completed.")
136
test/test_julia_to_julia_dict_sender.jl
Normal file
@@ -0,0 +1,136 @@
#!/usr/bin/env julia
# Test script for Dictionary transport testing
# Tests sending 1 large and 1 small Dictionary via direct and link transport
# Uses NATSBridge.jl smartsend with "dictionary" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# ------------------------------------------------------------------------------------------------ #
#                                     test dictionary transfer                                     #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

# Sender: Send Dictionaries via smartsend
function test_dict_send()
    # Create a small Dictionary (will use direct transport)
    small_dict = Dict(
        "name" => "Alice",
        "age" => 30,
        "scores" => [95, 88, 92],
        "metadata" => Dict(
            "height" => 155,
            "weight" => 55
        )
    )

    # Create a large Dictionary (~2MB, will use link transport above the 1MB threshold)
    large_dict = Dict(
        "ids" => collect(1:50000),
        "names" => ["User_$i" for i in 1:50000],
        "scores" => rand(1:100, 50000),
        "categories" => ["Category_$(rand(1:10))" for i in 1:50000],
        "metadata" => Dict(
            "source" => "test_generator",
            "timestamp" => string(Dates.now())
        )
    )

    # Test data 1: small Dictionary
    data1 = ("small_dict", small_dict, "dictionary")

    # Test data 2: large Dictionary
    data2 = ("large_dict", large_dict, "dictionary")

    # Use smartsend with dictionary type:
    #   small Dictionary -> direct transport (JSON encoded)
    #   large Dictionary -> link transport (uploaded to fileserver)
    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2];  # List of (dataname, data, type) tuples
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,  # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "dict_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = ""
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end
end

# Run the test
println("Starting Dictionary transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for dictionaries")
test_dict_send()

println("Test completed.")
84
test/test_julia_to_julia_file_receiver.jl
Normal file
@@ -0,0 +1,84 @@
#!/usr/bin/env julia
# Test script for large payload testing using binary transport
# Tests receiving a large file (> 1MB) sent via smartsend with binary type
# Updated to match NATSBridge.jl API

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# workdir =

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# ------------------------------------------------------------------------------------------------ #
#                                        test file transfer                                        #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end

# Receiver: Listen for messages and verify large payload handling
function test_large_binary_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is a list of (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result
            # For direct transport, data is the actual payload bytes;
            # for link transport, data is the URL string

            if isa(data, Vector{UInt8})
                file_size = length(data)
                log_trace("Received $(file_size) bytes of binary data for '$dataname' of type $data_type")

                # Save received data to a test file
                output_path = "./new_$dataname"
                write(output_path, data)
                log_trace("Saved received data to $output_path")
            else
                log_trace("Received link transport for '$dataname' of type $data_type: $data")
            end
        end
    end

    # Keep listening for 120 seconds
    sleep(120)
    NATS.drain(conn)
end

# Run the test
println("Starting large binary payload test...")

# # Run sender first
# println("start smartsend")
# test_large_binary_send()

# Run receiver
println("testing smartreceive")
test_large_binary_receive()

println("Test completed.")
122
test/test_julia_to_julia_file_sender.jl
Normal file
@@ -0,0 +1,122 @@
#!/usr/bin/env julia
# Test script for large payload testing using binary transport
# Tests sending a large file (> 1MB) via smartsend with binary type
# Updated to match NATSBridge.jl API

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# workdir =

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# ------------------------------------------------------------------------------------------------ #
#                                        test file transfer                                        #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

# Sender: Send large binary files via smartsend
function test_large_binary_send()
    # Read the files as binary data

    # test data 1
    file_path1 = "./testFile_large.zip"
    file_data1 = read(file_path1)
    filename1 = basename(file_path1)
    data1 = (filename1, file_data1, "binary")

    # test data 2
    file_path2 = "./testFile_small.zip"
    file_data2 = read(file_path2)
    filename2 = basename(file_path2)
    data2 = (filename2, file_data2, "binary")

    # Use smartsend with binary type - will automatically use link transport
    # if file size exceeds the threshold (1MB by default)
    # API: smartsend(subject, [(dataname, data, type), ...]; keywords...)
    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2];  # List of (dataname, data, type) tuples
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = ""
    )

    log_trace("Sent message with transport: $(env.payloads[1].transport)")
    log_trace("Envelope type: $(env.payloads[1].type)")

    # Check if link transport was used
    if env.payloads[1].transport == "link"
        log_trace("Using link transport - file uploaded to HTTP server")
        log_trace("URL: $(env.payloads[1].data)")
    else
        log_trace("Using direct transport - payload sent via NATS")
    end
end

# Run the test
println("Starting large binary payload test...")
println("Correlation ID: $correlation_id")

# Run sender first
println("start smartsend")
test_large_binary_send()

# Run receiver
# println("testing smartreceive")
# test_large_binary_receive()

println("Test completed.")
238
test/test_julia_to_julia_mix_payload_sender.jl
Normal file
@@ -0,0 +1,238 @@
|
||||
#!/usr/bin/env julia
|
||||
# Test script for mixed-content message testing
|
||||
# Tests sending a mix of text, json, table, image, audio, video, and binary data
|
||||
# from Julia serviceA to Julia serviceB using NATSBridge.jl smartsend
|
||||
#
|
||||
# This test demonstrates that any combination and any number of mixed content
|
||||
# can be sent and received correctly.
|
||||
|
||||
using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP, Base64
|
||||
|
||||
# Include the bridge module
|
||||
include("../src/NATSBridge.jl")
|
||||
using .NATSBridge
|
||||
|
||||
# Configuration
|
||||
const SUBJECT = "/NATSBridge_mix_test"
|
||||
const NATS_URL = "nats.yiem.cc"
|
||||
const FILESERVER_URL = "http://192.168.88.104:8080"
|
||||
|
||||
# Create correlation ID for tracing
|
||||
correlation_id = string(uuid4())
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------ #
|
||||
# test mixed content transfer #
|
||||
# ------------------------------------------------------------------------------------------------ #
|
||||
|
||||
|
||||
# Helper: Log with correlation ID
|
||||
function log_trace(message)
|
||||
timestamp = Dates.now()
|
||||
println("[$timestamp] [Correlation: $correlation_id] $message")
|
||||
end
|
||||
|
||||
|
||||
# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end
# Helper: Create sample data for each type
function create_sample_data()
    # Text data (small - direct transport)
    text_data = "Hello! This is a test chat message. 🎉\nHow are you doing today? 😊"

    # Dictionary/JSON data (medium - could be direct or link)
    dict_data = Dict(
        "type" => "chat",
        "sender" => "serviceA",
        "receiver" => "serviceB",
        "metadata" => Dict(
            "timestamp" => string(Dates.now()),
            "priority" => "high",
            "tags" => ["urgent", "chat", "test"]
        ),
        "content" => Dict(
            "text" => "This is a JSON-formatted chat message with nested structure.",
            "format" => "markdown",
            "mentions" => ["user1", "user2"]
        )
    )

    # Table data (DataFrame - small - direct transport)
    table_data_small = DataFrame(
        id = 1:10,
        message = ["msg_$i" for i in 1:10],
        sender = ["sender_$i" for i in 1:10],
        timestamp = [string(Dates.now()) for _ in 1:10],
        priority = rand(1:3, 10)
    )

    # Table data (DataFrame - large - link transport)
    # ~1.5MB of data (150,000 rows) - should trigger link transport
    table_data_large = DataFrame(
        id = 1:150_000,
        message = ["msg_$i" for i in 1:150_000],
        sender = ["sender_$i" for i in 1:150_000],
        timestamp = [string(Dates.now()) for i in 1:150_000],
        priority = rand(1:3, 150_000)
    )

    # Image data (small binary - direct transport)
    # A simple 10x10 pixel PNG-like blob: 8-byte PNG signature + 10*10*3 = 300 bytes of RGB data
    image_width = 10
    image_height = 10
    image_data = UInt8[]
    # PNG signature (simplified - not a valid PNG file, just recognizable bytes)
    push!(image_data, 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A)
    # Simple RGB data (RGBRGBRGB...)
    for i in 1:image_width*image_height
        push!(image_data, 0xFF, 0x00, 0x00) # Red pixel
    end

    # Image data (large - link transport)
    # Create a larger image (~1.5MB) to test link transport
    large_image_width = 500
    large_image_height = 1000
    large_image_data = UInt8[]
    # PNG signature (simplified for 500x1000)
    push!(large_image_data, 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A)
    # RGB data (500*1000*3 = 1,500,000 bytes)
    for i in 1:large_image_width*large_image_height
        push!(large_image_data, rand(1:255), rand(1:255), rand(1:255)) # Random color pixels
    end

    # Audio data (small binary - direct transport)
    audio_data = UInt8[rand(1:255) for _ in 1:100]

    # Audio data (large - link transport)
    # ~1.5MB of audio-like data
    large_audio_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Video data (small binary - direct transport)
    video_data = UInt8[rand(1:255) for _ in 1:150]

    # Video data (large - link transport)
    # ~1.5MB of video-like data
    large_video_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Binary data (small - direct transport)
    binary_data = UInt8[rand(1:255) for _ in 1:200]

    # Binary data (large - link transport)
    # ~1.5MB of binary data
    large_binary_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    return (
        text_data,
        dict_data,
        table_data_small,
        table_data_large,
        image_data,
        large_image_data,
        audio_data,
        large_audio_data,
        video_data,
        large_video_data,
        binary_data,
        large_binary_data
    )
end
# Sender: Send mixed content via smartsend
function test_mix_send()
    # Create sample data
    (text_data, dict_data, table_data_small, table_data_large, image_data, large_image_data, audio_data, large_audio_data, video_data, large_video_data, binary_data, large_binary_data) = create_sample_data()

    # Create the payload list - mixed content with both small and large data
    # Small data uses direct transport, large data uses link transport
    payloads = [
        # Small data (direct transport) - text, dictionary, small table
        ("chat_text", text_data, "text"),
        ("chat_json", dict_data, "dictionary"),
        ("chat_table_small", table_data_small, "table"),

        # Large data (link transport) - large table, image, audio, video, and binary
        ("chat_table_large", table_data_large, "table"),
        ("user_image_large", large_image_data, "image"),
        ("audio_clip_large", large_audio_data, "audio"),
        ("video_clip_large", large_video_data, "video"),
        ("binary_file_large", large_binary_data, "binary")
    ]

    # Use smartsend with mixed content
    env = NATSBridge.smartsend(
        SUBJECT,
        payloads,                       # List of (dataname, data, type) tuples
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,     # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "mix_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = ""
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end

    # Summary
    println("\n--- Transport Summary ---")
    direct_count = count(p -> p.transport == "direct", env.payloads)
    link_count = count(p -> p.transport == "link", env.payloads)
    log_trace("Direct transport: $direct_count payloads")
    log_trace("Link transport: $link_count payloads")
end
# Run the test
println("Starting mixed-content transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for mixed content")
test_mix_send()

println("\nTest completed.")
println("Note: Run test_julia_to_julia_mix_receiver.jl to receive the messages.")
228
test/test_julia_to_julia_mix_payloads_receiver.jl
Normal file
@@ -0,0 +1,228 @@
#!/usr/bin/env julia
# Test script for mixed-content message testing
# Tests receiving a mix of text, json, table, image, audio, video, and binary data
# from Julia serviceA to Julia serviceB using NATSBridge.jl smartreceive
#
# This test demonstrates that any combination and any number of mixed-content
# payloads can be sent and received correctly.

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP, Base64

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_mix_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# ------------------------------------------------------------------------------------------------ #
#                                   test mixed content transfer                                    #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end
# Receiver: Listen for messages and verify mixed content handling
function test_mix_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        log_trace("Received $(length(result)) payloads")

        # Result is a list of (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result
            log_trace("\n=== Payload: $dataname (type: $data_type) ===")

            # Handle different data types
            if data_type == "text"
                # Text data - should be a String
                if isa(data, String)
                    log_trace("  Type: String")
                    log_trace("  Length: $(length(data)) characters")

                    # Display first 200 characters
                    if length(data) > 200
                        log_trace("  First 200 chars: $(data[1:200])...")
                    else
                        log_trace("  Content: $data")
                    end

                    # Save to file
                    output_path = "./received_$dataname.txt"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected String, got $(typeof(data))")
                end

            elseif data_type == "dictionary"
                # Dictionary data - should be a JSON object
                if isa(data, JSON.Object{String, Any})
                    log_trace("  Type: Dict")
                    log_trace("  Keys: $(keys(data))")

                    # Display nested content
                    for (key, value) in data
                        log_trace("    $key => $value")
                    end

                    # Save to JSON file
                    output_path = "./received_$dataname.json"
                    json_str = JSON.json(data, 2)
                    write(output_path, json_str)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Dict, got $(typeof(data))")
                end

            elseif data_type == "table"
                # Table data - should materialize as a DataFrame
                data = DataFrame(data)
                if isa(data, DataFrame)
                    log_trace("  Type: DataFrame")
                    log_trace("  Dimensions: $(size(data, 1)) rows x $(size(data, 2)) columns")
                    log_trace("  Columns: $(names(data))")

                    # Display first few rows
                    log_trace("  First 5 rows:")
                    display(data[1:min(5, size(data, 1)), :])

                    # Save to Arrow file
                    output_path = "./received_$dataname.arrow"
                    io = IOBuffer()
                    Arrow.write(io, data)
                    write(output_path, take!(io))
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected DataFrame, got $(typeof(data))")
                end

            elseif data_type == "image"
                # Image data - should be Vector{UInt8}
                if isa(data, Vector{UInt8})
                    log_trace("  Type: Vector{UInt8} (binary)")
                    log_trace("  Size: $(length(data)) bytes")

                    # Save to file
                    output_path = "./received_$dataname.bin"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Vector{UInt8}, got $(typeof(data))")
                end

            elseif data_type == "audio"
                # Audio data - should be Vector{UInt8}
                if isa(data, Vector{UInt8})
                    log_trace("  Type: Vector{UInt8} (binary)")
                    log_trace("  Size: $(length(data)) bytes")

                    # Save to file
                    output_path = "./received_$dataname.bin"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Vector{UInt8}, got $(typeof(data))")
                end

            elseif data_type == "video"
                # Video data - should be Vector{UInt8}
                if isa(data, Vector{UInt8})
                    log_trace("  Type: Vector{UInt8} (binary)")
                    log_trace("  Size: $(length(data)) bytes")

                    # Save to file
                    output_path = "./received_$dataname.bin"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Vector{UInt8}, got $(typeof(data))")
                end

            elseif data_type == "binary"
                # Binary data - should be Vector{UInt8}
                if isa(data, Vector{UInt8})
                    log_trace("  Type: Vector{UInt8} (binary)")
                    log_trace("  Size: $(length(data)) bytes")

                    # Save to file
                    output_path = "./received_$dataname.bin"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Vector{UInt8}, got $(typeof(data))")
                end

            else
                log_trace("  ERROR: Unknown data type '$data_type'")
            end
        end
        # Summary
        println("\n=== Verification Summary ===")
        text_count = count(x -> x[3] == "text", result)
        dict_count = count(x -> x[3] == "dictionary", result)
        table_count = count(x -> x[3] == "table", result)
        image_count = count(x -> x[3] == "image", result)
        audio_count = count(x -> x[3] == "audio", result)
        video_count = count(x -> x[3] == "video", result)
        binary_count = count(x -> x[3] == "binary", result)

        log_trace("Text payloads: $text_count")
        log_trace("Dictionary payloads: $dict_count")
        log_trace("Table payloads: $table_count")
        log_trace("Image payloads: $image_count")
        log_trace("Audio payloads: $audio_count")
        log_trace("Video payloads: $video_count")
        log_trace("Binary payloads: $binary_count")

        # Print size/type details for each payload
        println("\n=== Payload Details ===")
        for (dataname, data, data_type) in result
            if data_type in ["image", "audio", "video", "binary"]
                log_trace("$dataname: $(length(data)) bytes (binary)")
            elseif data_type == "table"
                data = DataFrame(data)
                log_trace("$dataname: $(size(data, 1)) rows x $(size(data, 2)) columns (DataFrame)")
            elseif data_type == "dictionary"
                log_trace("$dataname: $(length(JSON.json(data))) bytes (Dict)")
            elseif data_type == "text"
                log_trace("$dataname: $(length(data)) characters (String)")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end
# Run the test
println("Starting mixed-content transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_mix_sender.jl first to send test data.")

# Run receiver
println("\ntesting smartreceive for mixed content")
test_mix_receive()

println("\nTest completed.")
84
test/test_julia_to_julia_table_receiver.jl
Normal file
@@ -0,0 +1,84 @@
#!/usr/bin/env julia
# Test script for DataFrame table transport testing
# Tests receiving one small and one large DataFrame via direct and link transport
# Uses NATSBridge.jl smartreceive with "table" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# ------------------------------------------------------------------------------------------------ #
#                                      test table transfer                                         #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end
# Receiver: Listen for messages and verify DataFrame table handling
function test_table_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is a list of (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result
            data = DataFrame(data)
            if isa(data, DataFrame)
                log_trace("Received DataFrame '$dataname' of type $data_type")
                log_trace("  Dimensions: $(size(data, 1)) rows x $(size(data, 2)) columns")
                log_trace("  Column names: $(names(data))")

                # Display first few rows
                println("  First 5 rows:")
                display(data[1:min(5, size(data, 1)), :])

                # Save to file
                output_path = "./received_$dataname.arrow"
                io = IOBuffer()
                Arrow.write(io, data)
                write(output_path, take!(io))
                log_trace("Saved DataFrame to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end
# Run the test
println("Starting DataFrame table transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_table_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive")
test_table_receive()

println("Test completed.")
134
test/test_julia_to_julia_table_sender.jl
Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env julia
# Test script for DataFrame table transport testing
# Tests sending one small and one large DataFrame via direct and link transport
# Uses NATSBridge.jl smartsend with "table" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# ------------------------------------------------------------------------------------------------ #
#                                      test table transfer                                         #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end
# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end
# Sender: Send DataFrame tables via smartsend
function test_table_send()
    # Create a small DataFrame (will use direct transport)
    small_df = DataFrame(
        id = 1:10,
        name = ["Alice", "Bob", "Charlie", "Diana", "Eve", "Frank", "Grace", "Henry", "Ivy", "Jack"],
        score = [95, 88, 92, 85, 90, 78, 95, 88, 92, 85],
        category = ["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"]
    )

    # Create a large DataFrame (will use link transport if > 1MB)
    # Generate a larger dataset (~2MB to ensure link transport)
    large_ids = 1:50000
    large_names = ["User_$i" for i in 1:50000]
    large_scores = rand(1:100, 50000)
    large_categories = ["Category_$(rand(1:10))" for i in 1:50000]

    large_df = DataFrame(
        id = large_ids,
        name = large_names,
        score = large_scores,
        category = large_categories
    )

    # Test data 1: small DataFrame
    data1 = ("small_table", small_df, "table")

    # Test data 2: large DataFrame
    data2 = ("large_table", large_df, "table")

    # Use smartsend with table type
    # For the small DataFrame: direct transport (Base64-encoded Arrow IPC)
    # For the large DataFrame: link transport (uploaded to the fileserver)
    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2],                 # List of (dataname, data, type) tuples
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,     # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "table_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = ""
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end
end
# Run the test
println("Starting DataFrame table transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for tables")
test_table_send()

println("Test completed.")
83
test/test_julia_to_julia_text_receiver.jl
Normal file
@@ -0,0 +1,83 @@
#!/usr/bin/env julia
# Test script for text transport testing
# Tests receiving one small and one large text from Julia serviceA to Julia serviceB
# Uses NATSBridge.jl smartreceive with "text" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# ------------------------------------------------------------------------------------------------ #
#                                       test text transfer                                         #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end
# Receiver: Listen for messages and verify text handling
function test_text_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is a list of (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result
            if isa(data, String)
                log_trace("Received text '$dataname' of type $data_type")
                log_trace("  Length: $(length(data)) characters")

                # Display first 100 characters
                if length(data) > 100
                    log_trace("  First 100 characters: $(data[1:100])...")
                else
                    log_trace("  Content: $data")
                end

                # Save to file
                output_path = "./received_$dataname.txt"
                write(output_path, data)
                log_trace("Saved text to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end
# Run the test
println("Starting text transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_text_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive for text")
test_text_receive()

println("Test completed.")
119
test/test_julia_to_julia_text_sender.jl
Normal file
@@ -0,0 +1,119 @@
#!/usr/bin/env julia
# Test script for text transport testing
# Tests sending one small and one large text from Julia serviceA to Julia serviceB
# Uses NATSBridge.jl smartsend with "text" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# ------------------------------------------------------------------------------------------------ #
#                                       test text transfer                                         #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end
# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end
# Sender: Send text via smartsend
function test_text_send()
    # Create a small text (will use direct transport)
    small_text = "Hello, this is a small text message. Testing direct transport via NATS."

    # Create a large text (will use link transport if > 1MB)
    # Generate a larger text (~2MB to ensure link transport)
    large_text = join(["Line $i: This is a sample text line with some content to pad the size. " for i in 1:50000], "")

    # Test data 1: small text
    data1 = ("small_text", small_text, "text")

    # Test data 2: large text
    data2 = ("large_text", large_text, "text")

    # Use smartsend with text type
    # For the small text: direct transport (Base64-encoded UTF-8)
    # For the large text: link transport (uploaded to the fileserver)
    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2],                 # List of (dataname, data, type) tuples
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,     # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "text_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = ""
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end
end
# Run the test
println("Starting text transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for text")
test_text_send()

println("Test completed.")
634
tutorial_julia.md
Normal file
@@ -0,0 +1,634 @@
# NATSBridge.jl Tutorial

A comprehensive tutorial for learning how to use NATSBridge.jl for bi-directional communication between Julia and JavaScript services using NATS.

## Table of Contents

1. [What is NATSBridge.jl?](#what-is-natsbridgejl)
2. [Key Concepts](#key-concepts)
3. [Installation](#installation)
4. [Basic Usage](#basic-usage)
5. [Payload Types](#payload-types)
6. [Transport Strategies](#transport-strategies)
7. [Advanced Features](#advanced-features)
8. [Complete Examples](#complete-examples)

---

## What is NATSBridge.jl?

NATSBridge.jl is a Julia module that provides a high-level API for sending and receiving data across network boundaries using NATS as the message bus. It implements the **Claim-Check pattern** for handling large payloads efficiently.

### Core Features

- **Bi-directional communication**: Julia ↔ JavaScript
- **Smart transport selection**: Automatic direct vs link transport based on payload size
- **Multi-payload support**: Send multiple payloads of different types in a single message
- **Claim-check pattern**: Upload large files to HTTP server, send only URLs via NATS
- **Type-aware serialization**: Different serialization strategies for different data types
||||
|
||||
---
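The smart transport selection above boils down to one comparison against a size threshold. A minimal sketch of that decision, assuming the default 1 MB threshold used throughout this tutorial (`choose_transport` is illustrative and not part of the NATSBridge.jl API):

```julia
# Size-based transport decision: payloads below the threshold travel inside
# the NATS message ("direct"); larger ones are uploaded and referenced by
# URL ("link"). `size_threshold` mirrors the keyword used by smartsend.
choose_transport(nbytes::Integer; size_threshold::Integer = 1_000_000) =
    nbytes < size_threshold ? "direct" : "link"
```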

## Key Concepts

### 1. msgEnvelope_v1 (Message Envelope)

The `msgEnvelope_v1` structure provides a comprehensive message format for bidirectional communication:

```julia
struct msgEnvelope_v1
    correlationId::String     # Unique identifier used to track messages
    msgId::String             # Id of this message
    timestamp::String         # Timestamp at which the message was published

    sendTo::String            # Topic/subject the sender sends to
    msgPurpose::String        # Purpose (ACK | NACK | updateStatus | shutdown | chat)
    senderName::String        # Sender name (e.g., "agent-wine-web-frontend")
    senderId::String          # Sender id (uuid4)
    receiverName::String      # Receiver name (e.g., "agent-backend")
    receiverId::String        # Receiver id (uuid4, or nothing for broadcast)
    replyTo::String           # Topic to reply to
    replyToMsgId::String      # Id of the message this message is replying to
    brokerURL::String         # NATS server address

    metadata::Dict{String, Any}
    payloads::AbstractArray{msgPayload_v1}  # Multiple payloads stored here
end
```

### 2. msgPayload_v1 (Payload Structure)

The `msgPayload_v1` structure provides flexible payload handling:

```julia
struct msgPayload_v1
    id::String            # Id of this payload (e.g., "uuid4")
    dataname::String      # Name of this payload (e.g., "login_image")
    type::String          # "text | dictionary | table | image | audio | video | binary"
    transport::String     # "direct | link"
    encoding::String      # "none | json | base64 | arrow-ipc"
    size::Integer         # Data size in bytes
    data::Any             # Payload data for direct transport, or a URL for link transport
    metadata::Dict{String, Any}  # Dict("checksum" => "sha256_hash", ...)
end
```

### 3. Standard API Format

The system uses a **standardized list-of-tuples format** for all payload operations:

```julia
# Input format for smartsend (always a list of tuples with type info)
[(dataname1, data1, type1), (dataname2, data2, type2), ...]

# Output format for smartreceive (always returns a list of tuples)
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
```

**Important**: Even when sending a single payload, you must wrap it in a list.

---
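If you find the wrap-in-a-list requirement easy to forget, a tiny dispatch-based wrapper can enforce it at call sites. This `normalize_payloads` helper is a hypothetical convenience, not part of the NATSBridge.jl API:

```julia
# Enforce the list-of-tuples contract: a single (dataname, data, type)
# tuple is wrapped in a one-element list; lists pass through unchanged.
normalize_payloads(p::Tuple{String, Any, String}) = [p]
normalize_payloads(p::AbstractVector) = p
```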

## Installation

```julia
using Pkg
Pkg.add("NATS")
Pkg.add("JSON")
Pkg.add("Arrow")
Pkg.add("HTTP")
Pkg.add("UUIDs")
Pkg.add("Dates")
Pkg.add("Base64")
Pkg.add("PrettyPrinting")
Pkg.add("DataFrames")
```

Then include the NATSBridge module:

```julia
include("NATSBridge.jl")
using .NATSBridge
```

---

## Basic Usage

### Sending Data (smartsend)

```julia
using NATSBridge

# Send a simple dictionary
data = Dict("key" => "value")
env = NATSBridge.smartsend("my.subject", [("dataname1", data, "dictionary")])
```

### Receiving Data (smartreceive)

```julia
using NATS
using NATSBridge

# Connect and subscribe to a NATS subject
conn = NATS.connect("nats://localhost:4222")
NATS.subscribe(conn, "my.subject") do msg
    # Process the message
    result = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    # result is a list of (dataname, data, type) tuples
    for (dataname, data, type) in result
        println("Received $dataname of type $type")
        println("Data: $data")
    end
end
```

---

## Payload Types

NATSBridge.jl supports the following payload types:

| Type | Description | Serialization |
|------|-------------|---------------|
| `text` | Plain text | UTF-8 encoding |
| `dictionary` | JSON-serializable data (Dict, NamedTuple) | JSON |
| `table` | Tabular data (DataFrame, array of structs) | Apache Arrow IPC |
| `image` | Image data (bitmap, PNG/JPG bytes) | Binary |
| `audio` | Audio data (WAV, MP3 bytes) | Binary |
| `video` | Video data (MP4, AVI bytes) | Binary |
| `binary` | Generic binary data | Binary |

---
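The table above, combined with the `encoding` values of `msgPayload_v1` and the envelope example later in this repository (where a direct text payload carries `"encoding": "base64"` and a link payload carries `"encoding": "none"`), suggests a simple mapping from payload type and transport to wire encoding. The following is a sketch of that mapping, not the module's actual dispatch code:

```julia
# Illustrative type/transport -> encoding mapping, mirroring the
# msgPayload_v1.encoding values ("none | json | base64 | arrow-ipc").
function encoding_for(payload_type::String, transport::String)
    transport == "link" && return "none"       # only a URL travels over NATS
    payload_type == "dictionary" && return "json"
    payload_type == "table"      && return "arrow-ipc"
    return "base64"                            # text/image/audio/video/binary bytes
end
```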

## Transport Strategies

NATSBridge.jl automatically selects the appropriate transport strategy based on payload size:

### Direct Transport (< 1MB)

Small payloads are encoded as Base64 and sent directly over NATS.

```julia
# Small data (< 1MB) - uses direct transport
small_data = rand(1000)  # ~8KB
env = NATSBridge.smartsend("small", [("data", small_data, "table")])
```

### Link Transport (≥ 1MB)

Large payloads are uploaded to an HTTP file server, and only the URL is sent via NATS.

```julia
# Large data (≥ 1MB) - uses link transport
large_data = rand(10_000_000)  # ~80MB
env = NATSBridge.smartsend("large", [("data", large_data, "table")])
```

---

## Complete Examples

### Example 1: Text Message

**Sender:**
```julia
using HTTP
using JSON
using UUIDs
using NATSBridge

const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function test_text_send()
    small_text = "Hello, this is a small text message."
    large_text = join(["Line $i: " for i in 1:150_000], "")  # ~2MB, triggers link transport

    data1 = ("small_text", small_text, "text")
    data2 = ("large_text", large_text, "text")

    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "text_sender"
    )
end
```

**Receiver:**
```julia
using NATS
using NATSBridge

const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats://localhost:4222"

function test_text_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        result = NATSBridge.smartreceive(
            msg,
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        for (dataname, data, data_type) in result
            if data_type == "text"
                println("Received text: $data")
                write("./received_$dataname.txt", data)
            end
        end
    end
    sleep(120)
    NATS.drain(conn)
end
```

### Example 2: Dictionary (JSON) Message

**Sender:**
```julia
using HTTP
using JSON
using UUIDs
using NATSBridge

const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function test_dict_send()
    small_dict = Dict("name" => "Alice", "age" => 30)
    large_dict = Dict("ids" => collect(1:50000), "names" => ["User_$i" for i in 1:50000])

    data1 = ("small_dict", small_dict, "dictionary")
    data2 = ("large_dict", large_dict, "dictionary")

    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat"
    )
end
```

**Receiver:**
```julia
using JSON
using NATS
using NATSBridge

const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats://localhost:4222"

function test_dict_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        result = NATSBridge.smartreceive(
            msg,
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        for (dataname, data, data_type) in result
            if data_type == "dictionary"
                println("Received dictionary: $data")
                write("./received_$dataname.json", JSON.json(data, 2))
            end
        end
    end
    sleep(120)
    NATS.drain(conn)
end
```

### Example 3: DataFrame (Table) Message

**Sender:**
```julia
using HTTP
using JSON
using UUIDs
using DataFrames
using NATSBridge

const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function test_table_send()
    small_df = DataFrame(id = 1:3, name = ["Alice", "Bob", "Charlie"], score = [95, 88, 92])
    large_df = DataFrame(id = 1:50000, name = ["User_$i" for i in 1:50000], score = rand(1:100, 50000))

    data1 = ("small_table", small_df, "table")
    data2 = ("large_table", large_df, "table")

    env = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat"
    )
end
```

**Receiver:**
```julia
using NATS
using DataFrames
using NATSBridge

const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats://localhost:4222"

function test_table_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        result = NATSBridge.smartreceive(
            msg,
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        for (dataname, data, data_type) in result
            if data_type == "table"
                data = DataFrame(data)
                println("Received DataFrame with $(size(data, 1)) rows")
                display(data[1:min(5, size(data, 1)), :])
            end
        end
    end
    sleep(120)
    NATS.drain(conn)
end
```

### Example 4: Mixed Content (Chat with Text, Image, Audio)

**Sender:**
```julia
using HTTP
using JSON
using UUIDs
using DataFrames
using NATSBridge

const SUBJECT = "/NATSBridge_mix_test"
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function test_mix_send()
    # Text data
    text_data = "Hello! This is a test chat message. 🎉"

    # Dictionary data
    dict_data = Dict("type" => "chat", "sender" => "serviceA")

    # Small table data
    table_data_small = DataFrame(id = 1:10, name = ["msg_$i" for i in 1:10])

    # Large table data (link transport)
    table_data_large = DataFrame(id = 1:150_000, name = ["msg_$i" for i in 1:150_000])

    # Small image data (direct transport)
    image_data = UInt8[rand(1:255) for _ in 1:100]

    # Large image data (link transport)
    large_image_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Small audio data (direct transport)
    audio_data = UInt8[rand(1:255) for _ in 1:100]

    # Large audio data (link transport)
    large_audio_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Small video data (direct transport)
    video_data = UInt8[rand(1:255) for _ in 1:150]

    # Large video data (link transport)
    large_video_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Small binary data (direct transport)
    binary_data = UInt8[rand(1:255) for _ in 1:200]

    # Large binary data (link transport)
    large_binary_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Create payloads list - mixed content
    payloads = [
        # Small data (direct transport)
        ("chat_text", text_data, "text"),
        ("chat_json", dict_data, "dictionary"),
        ("chat_table_small", table_data_small, "table"),
        ("user_image_small", image_data, "image"),
        ("audio_clip_small", audio_data, "audio"),
        ("video_clip_small", video_data, "video"),
        ("binary_file_small", binary_data, "binary"),

        # Large data (link transport)
        ("chat_table_large", table_data_large, "table"),
        ("user_image_large", large_image_data, "image"),
        ("audio_clip_large", large_audio_data, "audio"),
        ("video_clip_large", large_video_data, "video"),
        ("binary_file_large", large_binary_data, "binary")
    ]

    env = NATSBridge.smartsend(
        SUBJECT,
        payloads,
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "mix_sender"
    )
end
```

**Receiver:**
```julia
using NATS
using DataFrames
using NATSBridge

const SUBJECT = "/NATSBridge_mix_test"
const NATS_URL = "nats://localhost:4222"

function test_mix_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        result = NATSBridge.smartreceive(
            msg,
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        println("Received $(length(result)) payloads")

        for (dataname, data, data_type) in result
            println("\n=== Payload: $dataname (type: $data_type) ===")

            if data_type == "text"
                println("  Type: String")
                println("  Length: $(length(data)) characters")

            elseif data_type == "dictionary"
                println("  Type: JSON Object")
                println("  Keys: $(keys(data))")

            elseif data_type == "table"
                data = DataFrame(data)
                println("  Type: DataFrame")
                println("  Dimensions: $(size(data, 1)) rows x $(size(data, 2)) columns")

            elseif data_type in ("image", "audio", "video", "binary")
                println("  Type: Vector{UInt8}")
                println("  Size: $(length(data)) bytes")
                write("./received_$dataname.bin", data)
            end
        end
    end
    sleep(120)
    NATS.drain(conn)
end
```

---

## Best Practices

1. **Always wrap payloads in a list** - even for a single payload: `[("dataname", data, "type")]`
2. **Use the appropriate transport** - let NATSBridge handle size-based routing (default 1MB threshold)
3. **Customize the size threshold** - use the `size_threshold` parameter to adjust the direct/link split
4. **Provide a fileserver handler** - implement `fileserverUploadHandler` to enable link transport
5. **Include correlation IDs** - to track messages across distributed systems
6. **Handle errors** - implement proper error handling for network failures
7. **Close connections** - ensure NATS connections are properly closed using `NATS.drain()`

---
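Practice 6 deserves a concrete shape. A minimal retry wrapper with exponential backoff, in the spirit of the `max_retries`/`base_delay`/`max_delay` keywords that `smartreceive` already accepts (delays in milliseconds; this helper is a sketch, not part of the NATSBridge.jl API):

```julia
# Retry a flaky operation (e.g., downloading a link-transport payload)
# with exponential backoff, capped at max_delay milliseconds.
function with_retries(f; max_retries = 5, base_delay = 100, max_delay = 5000)
    for attempt in 1:max_retries
        try
            return f()
        catch
            attempt == max_retries && rethrow()          # give up on the last attempt
            delay_ms = min(base_delay * 2^(attempt - 1), max_delay)
            sleep(delay_ms / 1000)
        end
    end
end
```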

## Conclusion

NATSBridge.jl provides a powerful abstraction for bi-directional communication between Julia and JavaScript services. By understanding the key concepts and following the best practices above, you can build robust, scalable applications that leverage the full power of NATS messaging.

For more information, see:
- [`docs/architecture.md`](./architecture.md) - detailed architecture documentation
- [`docs/implementation.md`](./implementation.md) - implementation details
939
walkthrough_julia.md
Normal file
@@ -0,0 +1,939 @@

# NATSBridge.jl Walkthrough: Building a Chat System

A step-by-step guided walkthrough for building a real-time chat system using NATSBridge.jl with mixed content support (text, images, audio, video, and files).

## Prerequisites

- Julia 1.7+
- NATS server running
- HTTP file server (Plik) running

## Step 1: Understanding the Chat System Architecture

### System Components

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                                 Chat System                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   ┌──────────────┐           NATS          ┌──────────────┐                 │
│   │    Julia     │◄──────────┬───────────► │  JavaScript  │                 │
│   │   Service    │           │             │    Client    │                 │
│   │              │           │             │              │                 │
│   │  - Text      │           │             │  - Text      │                 │
│   │  - Images    │           │             │  - Images    │                 │
│   │  - Audio     │           ▼             │  - Audio     │                 │
│   │  - Video     │     NATSBridge.jl       │  - Files     │                 │
│   │  - Files     │           │             │  - Tables    │                 │
│   └──────────────┘           │             └──────────────┘                 │
│                              │                                              │
│                      ┌───────┴───────┐                                      │
│                      │     NATS      │                                      │
│                      │    Server     │                                      │
│                      └───────────────┘                                      │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

For large payloads (> 1MB):
┌─────────────────────────────────────────────────────────────────────────────┐
│                             File Server (Plik)                              │
│                                                                             │
│  Julia Service ──► Upload ──► File Server ──► Download ◄── JavaScript Client│
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

### Message Format

Each chat message is an envelope containing multiple payloads:

```json
{
  "correlationId": "uuid4",
  "msgId": "uuid4",
  "timestamp": "2024-01-15T10:30:00Z",
  "sendTo": "/chat/room1",
  "msgPurpose": "chat",
  "senderName": "user-1",
  "senderId": "uuid4",
  "receiverName": "user-2",
  "receiverId": "uuid4",
  "brokerURL": "nats://localhost:4222",
  "payloads": [
    {
      "id": "uuid4",
      "dataname": "message_text",
      "type": "text",
      "transport": "direct",
      "encoding": "base64",
      "size": 256,
      "data": "SGVsbG8gV29ybGQh",
      "metadata": {}
    },
    {
      "id": "uuid4",
      "dataname": "user_image",
      "type": "image",
      "transport": "link",
      "encoding": "none",
      "size": 15433,
      "data": "http://localhost:8080/file/UPLOAD_ID/FILE_ID/image.jpg",
      "metadata": {}
    }
  ]
}
```
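Given an envelope in the shape above, a receiver mostly cares about which payloads still need fetching. A small sketch of that extraction, operating on a parsed envelope (here a plain `Dict` standing in for the output of `JSON.parse`; the helper is illustrative, not part of the NATSBridge.jl API):

```julia
# Collect the URLs of all link-transport payloads in a parsed envelope;
# these are the claim-check references a receiver must download.
link_urls(envelope) =
    [p["data"] for p in envelope["payloads"] if p["transport"] == "link"]
```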

## Step 2: Setting Up the Environment

### 1. Start NATS Server

```bash
# Using Docker
docker run -d -p 4222:4222 -p 8222:8222 --name nats-server nats:latest

# Or download from https://github.com/nats-io/nats-server/releases
./nats-server
```

### 2. Start HTTP File Server (Plik)

```bash
# Using Docker
docker run -d -p 8080:8080 --name plik rootgg/plik:latest

# Or download from https://github.com/root-gg/plik/releases
./plikd -d
```

### 3. Install Julia Dependencies

```julia
using Pkg
Pkg.add("NATS")
Pkg.add("JSON")
Pkg.add("Arrow")
Pkg.add("HTTP")
Pkg.add("UUIDs")
Pkg.add("Dates")
Pkg.add("Base64")
Pkg.add("PrettyPrinting")
Pkg.add("DataFrames")
```

## Step 3: Basic Text-Only Chat

### Sender (User 1)

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

# Include the bridge module
include("NATSBridge.jl")
using .NATSBridge

# Configuration
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

# File upload handler for the plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

# Send a simple text message
function send_text_message()
    message_text = "Hello, how are you today?"

    env = NATSBridge.smartsend(
        SUBJECT,
        [("message", message_text, "text")],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Sent text message with correlation ID: $(env.correlationId)")
end

send_text_message()
```

### Receiver (User 2)

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

# Include the bridge module
include("NATSBridge.jl")
using .NATSBridge

# Configuration
const NATS_URL = "nats://localhost:4222"
const SUBJECT = "/chat/room1"

# Message handler
function message_handler(msg::NATS.Msg)
    payloads = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    # Extract the text message
    for (dataname, data, data_type) in payloads
        if data_type == "text"
            println("Received message: $data")
            # Save to file
            write("./received_$dataname.txt", data)
        end
    end
end

# Subscribe to the chat room
conn = NATS.connect(NATS_URL)
NATS.subscribe(conn, SUBJECT) do msg
    message_handler(msg)
end

# Keep the program running
while true
    sleep(1)
end
```

## Step 4: Adding Image Support

### Sending an Image

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body; body_is_form=false)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_image()
    # Read the image file as raw bytes
    image_data = read("screenshot.png")  # returns Vector{UInt8}

    # Send with a text message
    env = NATSBridge.smartsend(
        SUBJECT,
        [
            ("text", "Check out this screenshot!", "text"),
            ("screenshot", image_data, "image")
        ],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Sent image with correlation ID: $(env.correlationId)")
end

send_image()
```

### Receiving an Image

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const SUBJECT = "/chat/room1"

function message_handler(msg::NATS.Msg)
    payloads = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    for (dataname, data, data_type) in payloads
        if data_type == "text"
            println("Text: $data")
        elseif data_type == "image"
            # Save the image to a file
            filename = "received_$dataname.bin"
            write(filename, data)
            println("Saved image: $filename")
        end
    end
end

conn = NATS.connect(NATS_URL)
NATS.subscribe(conn, SUBJECT) do msg
    message_handler(msg)
end
```
|
||||
|
||||
## Step 5: Handling Large Files with Link Transport

### Automatic Transport Selection

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Create a one-shot upload to obtain an upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the payload as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_large_file()
    # Create a large payload (> 1 MB triggers link transport)
    large_data = rand(UInt8, 10_000_000)  # ~10 MB of random bytes

    env = NATSBridge.smartsend(
        SUBJECT,
        [("large_file", large_data, "binary")],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Uploaded large file to: $(env.payloads[1].data)")
    println("Correlation ID: $(env.correlationId)")
end

send_large_file()
```
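
The code above relies on `size_threshold` to pick the transport automatically. The decision rule can be illustrated with a small, self-contained sketch (the `choose_transport` function below is illustrative only, not part of NATSBridge's API): payloads at or below the threshold are published inline on NATS, while larger ones are uploaded through the file-server handler so that only their URL travels on the bus.

```julia
# Illustrative sketch of size-based transport selection; the real logic
# lives inside NATSBridge.smartsend and may differ in detail.
function choose_transport(data::Vector{UInt8}; size_threshold::Int = 1_000_000)
    # At or below the threshold the payload rides the NATS message directly;
    # above it, the payload is uploaded and replaced by a download URL.
    return sizeof(data) <= size_threshold ? "direct" : "link"
end

choose_transport(zeros(UInt8, 1_000))        # "direct"
choose_transport(zeros(UInt8, 10_000_000))   # "link"
```

With the walkthrough's threshold of `1_000_000` bytes, the ~10 MB payload in `send_large_file` always takes the link path.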

## Step 6: Audio and Video Support

### Sending Audio

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Create a one-shot upload to obtain an upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the payload as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_audio()
    # Read an audio file (WAV, MP3, etc.) as raw bytes
    audio_data = read("voice_message.mp3")

    env = NATSBridge.smartsend(
        SUBJECT,
        [("voice_message", audio_data, "audio")],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Sent audio message: $(env.correlationId)")
end

send_audio()
```

### Sending Video

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Create a one-shot upload to obtain an upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the payload as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_video()
    # Read a video file (MP4, AVI, etc.) as raw bytes
    video_data = read("video_message.mp4")

    env = NATSBridge.smartsend(
        SUBJECT,
        [("video_message", video_data, "video")],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Sent video message: $(env.correlationId)")
end

send_video()
```

## Step 7: Table/Data Exchange

### Sending Tabular Data

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Create a one-shot upload to obtain an upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the payload as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_table()
    # Create a DataFrame
    df = DataFrame(
        id = 1:5,
        name = ["Alice", "Bob", "Charlie", "Diana", "Eve"],
        score = [95, 88, 92, 98, 85],
        grade = ['A', 'B', 'A', 'B', 'B']
    )

    env = NATSBridge.smartsend(
        SUBJECT,
        [("student_scores", df, "table")],
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Sent table with $(nrow(df)) rows")
end

send_table()
```

### Receiving and Using Tables

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64
using Statistics  # for mean

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const SUBJECT = "/chat/room1"

function message_handler(msg::NATS.Msg)
    payloads = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    for (dataname, data, data_type) in payloads
        if data_type == "table"
            data = DataFrame(data)
            println("Received table:")
            show(data)
            println("\nAverage score: $(mean(data.score))")
        end
    end
end

NATS.subscribe(SUBJECT) do msg
    message_handler(msg)
end
```

## Step 8: Bidirectional Communication

### Request-Response Pattern

```julia
using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

include("NATSBridge.jl")
using .NATSBridge

const NATS_URL = "nats://localhost:4222"
const SUBJECT = "/api/query"
const REPLY_SUBJECT = "/api/response"

# Request (plik_upload_handler as defined in Step 5)
function send_request()
    query_data = Dict("query" => "SELECT * FROM users")

    env = NATSBridge.smartsend(
        SUBJECT,
        [("sql_query", query_data, "dictionary")],
        nats_url = NATS_URL,
        fileserver_url = "http://localhost:8080",
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "request",
        sender_name = "frontend",
        receiver_name = "backend",
        reply_to = REPLY_SUBJECT,
        reply_to_msg_id = string(uuid4())
    )

    println("Request sent: $(env.correlationId)")
end

# Response handler
function response_handler(msg::NATS.Msg)
    payloads = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    for (dataname, data, data_type) in payloads
        if data_type == "table"
            data = DataFrame(data)
            println("Query results:")
            show(data)
        end
    end
end

NATS.subscribe(REPLY_SUBJECT) do msg
    response_handler(msg)
end
```
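
The block above shows only the requesting side. For completeness, here is a hedged sketch of a backend responder: it assumes `plik_upload_handler` and the constants from the block above, subscribes on `SUBJECT`, and answers on `REPLY_SUBJECT` with a stubbed result table. A real service would execute the received query against its database and would take the reply subject from the request envelope rather than from a shared constant.

```julia
# Hypothetical backend responder for the request/response pattern above.
# Assumes plik_upload_handler, NATS_URL, SUBJECT, and REPLY_SUBJECT from the
# requester example; query execution is stubbed with a fixed DataFrame.
function request_handler(msg::NATS.Msg)
    payloads = NATSBridge.smartreceive(
        msg,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    )

    for (dataname, data, data_type) in payloads
        if data_type == "dictionary"
            println("Received query: $(data["query"])")

            # Stub: a real backend would run the query here
            results = DataFrame(id = [1, 2], name = ["Alice", "Bob"])

            NATSBridge.smartsend(
                REPLY_SUBJECT,
                [("query_results", results, "table")],
                nats_url = NATS_URL,
                fileserver_url = "http://localhost:8080",
                fileserverUploadHandler = plik_upload_handler,
                size_threshold = 1_000_000,
                correlation_id = string(uuid4()),
                msg_purpose = "response",
                sender_name = "backend",
                receiver_name = "frontend"
            )
        end
    end
end

NATS.subscribe(SUBJECT) do msg
    request_handler(msg)
end
```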

## Step 9: Complete Chat Application

### Full Chat System

```julia
module ChatApp

using NATS
using JSON
using UUIDs
using Dates
using PrettyPrinting
using DataFrames
using Arrow
using HTTP
using Base64

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const NATS_URL = "nats://localhost:4222"
const FILESERVER_URL = "http://localhost:8080"
const SUBJECT = "/chat/room1"

# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Create a one-shot upload to obtain an upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the payload as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

function send_chat_message(
    text::String,
    image_path::Union{String, Nothing} = nothing,
    audio_path::Union{String, Nothing} = nothing
)
    # Build the payload list (Any[] so payloads of different types can coexist)
    payloads = Any[("message_text", text, "text")]

    if image_path !== nothing
        image_data = read(image_path)  # raw bytes
        push!(payloads, ("user_image", image_data, "image"))
    end

    if audio_path !== nothing
        audio_data = read(audio_path)  # raw bytes
        push!(payloads, ("user_audio", audio_data, "audio"))
    end

    env = NATSBridge.smartsend(
        SUBJECT,
        payloads,
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserverUploadHandler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = string(uuid4()),
        msg_purpose = "chat",
        sender_name = "user-1"
    )

    println("Message sent with correlation ID: $(env.correlationId)")
end

function receive_chat_messages()
    function message_handler(msg::NATS.Msg)
        payloads = NATSBridge.smartreceive(
            msg,
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        println("\n--- New Message ---")
        for (dataname, data, data_type) in payloads
            if data_type == "text"
                println("Text: $data")
            elseif data_type == "image"
                filename = "received_$dataname.bin"
                write(filename, data)
                println("Image saved: $filename")
            elseif data_type == "audio"
                filename = "received_$dataname.bin"
                write(filename, data)
                println("Audio saved: $filename")
            elseif data_type == "table"
                println("Table received:")
                data = DataFrame(data)
                show(data)
            end
        end
    end

    NATS.subscribe(SUBJECT) do msg
        message_handler(msg)
    end
    println("Subscribed to: $SUBJECT")
end

function run_interactive_chat()
    println("\n=== Interactive Chat ===")
    println("1. Send a message")
    println("2. Join a chat room")
    println("3. Exit")

    while true
        print("\nSelect option (1-3): ")
        choice = readline()

        if choice == "1"
            print("Enter message text: ")
            text = readline()
            send_chat_message(text)
        elseif choice == "2"
            receive_chat_messages()
        elseif choice == "3"
            break
        end
    end
end

end # module

# Run the chat app
using .ChatApp
ChatApp.run_interactive_chat()
```

## Step 10: Testing the Chat System

### Test Scenario 1: Text-Only Chat

```bash
# Terminal 1: Start the chat receiver
julia test_julia_to_julia_text_receiver.jl

# Terminal 2: Send a message
julia test_julia_to_julia_text_sender.jl
```

### Test Scenario 2: Image Chat

```bash
# Terminal 1: Receive messages
julia test_julia_to_julia_mix_payloads_receiver.jl

# Terminal 2: Send image
julia test_julia_to_julia_mix_payload_sender.jl
```

### Test Scenario 3: Large File Transfer

```bash
# Terminal 2: Send large file
julia test_julia_to_julia_mix_payload_sender.jl
```

## Conclusion

This walkthrough demonstrated how to build a chat system using NATSBridge.jl with support for:

- Text messages
- Images (direct transport for small, link transport for large)
- Audio files
- Video files
- Tabular data (DataFrames)
- Bidirectional communication
- Mixed-content messages

The key takeaways are:

1. **Always wrap payloads in a list** - Even for single payloads: `[("dataname", data, "type")]`
2. **Use appropriate transport** - NATSBridge automatically handles size-based routing
3. **Support mixed content** - Multiple payloads of different types in one message
4. **Handle errors** - Implement proper error handling for network failures
5. **Use correlation IDs** - Track messages across distributed systems
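
Takeaway 1 in a nutshell (plain Julia, no NATSBridge required): the payload argument is always a vector of `(dataname, data, type)` tuples, even when a message carries a single payload.

```julia
# A single payload is still wrapped in a list
single = [("message_text", "hello", "text")]

# Mixed-content messages just add more tuples; use Any[] so payloads
# of different data types (String, Vector{UInt8}, ...) can coexist.
mixed = Any[
    ("message_text", "see attached", "text"),
    ("user_image", rand(UInt8, 16), "image"),
]

length(single)                   # 1
all(p -> length(p) == 3, mixed)  # true
```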

For more information, see:

- [`docs/architecture.md`](./docs/architecture.md) - Detailed architecture documentation
- [`docs/implementation.md`](./docs/implementation.md) - Implementation details