Merged in feature/IO-2972-Final-Redis-Sockets-Fixes (pull request #1803)

Feature/IO-2972 Final Redis Sockets Fixes
This commit is contained in:
Dave Richer
2024-10-07 20:21:41 +00:00
15 changed files with 1537 additions and 368 deletions

25
.dockerignore Normal file

@@ -0,0 +1,25 @@
# Directories to exclude
.circleci
.idea
.platform
.vscode
_reference
client
redis/dockerdata
hasura
node_modules
# Files to exclude
.ebignore
.editorconfig
.eslintrc.json
.gitignore
.prettierrc.js
Dockerfile
README.MD
bodyshop_translations.babel
docker-compose.yml
ecosystem.config.js
# Optional: Exclude logs and temporary files
*.log

44
Dockerfile Normal file

@@ -0,0 +1,44 @@
# Use Amazon Linux 2023 as the base image
FROM amazonlinux:2023
# Install Git and Node.js (Amazon Linux 2023 uses the DNF package manager)
RUN dnf install -y git \
&& curl -sL https://rpm.nodesource.com/setup_20.x | bash - \
&& dnf install -y nodejs \
&& dnf clean all
# Install dependencies required by node-canvas
RUN dnf install -y \
gcc \
gcc-c++ \
cairo-devel \
pango-devel \
libjpeg-turbo-devel \
giflib-devel \
libpng-devel \
make \
python3 \
python3-pip \
&& dnf clean all
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install Nodemon
RUN npm install -g nodemon
# Install dependencies
RUN npm install --omit=dev
# Copy the rest of your application code
COPY . .
# Expose the port your app runs on (adjust if necessary)
EXPOSE 4000
# Start the application
CMD ["nodemon", "--legacy-watch", "server.js"]

182
_reference/dockerreadme.md Normal file

@@ -0,0 +1,182 @@
# Setting up External Networking and Static IP for WSL2 using Hyper-V
This guide will walk you through the steps to configure your WSL2 (Windows Subsystem for Linux) instance to use an external Hyper-V virtual switch, enabling it to connect directly to your local network. Additionally, you'll learn how to assign a static IP address to your WSL2 instance.
## Prerequisites
1. **Windows 10/11** with **WSL2** installed.
2. **Hyper-V** enabled on your system. If not, follow these steps to enable it:
- Open PowerShell as Administrator and run:
```powershell
dism.exe /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
```
- Restart your computer.
3. A basic understanding of networking and WSL2 configuration.
---
## Step 1: Create an External Hyper-V Switch
1. **Open Hyper-V Manager**:
- Press the `Windows` key, type `Hyper-V Manager`, and open it.
2. **Create a Virtual Switch**:
- In the right-hand pane, click `Virtual Switch Manager`.
- Choose `External` and click `Create Virtual Switch`.
- Select your external network adapter (this is usually your Ethernet or Wi-Fi adapter).
- Give the switch a name (e.g., `WSL External Switch`), then click `Apply` and `OK`.
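If you prefer the command line, the same switch can be created from an elevated PowerShell prompt. The adapter name below (`Ethernet`) is an assumption; list your adapters with `Get-NetAdapter` and substitute the correct name:
```powershell
# Create an external switch bound to your physical adapter (run as Administrator)
New-VMSwitch -Name "WSL External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```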
---
## Step 2: Configure WSL2 to Use the External Hyper-V Switch
Now that you've created the external virtual switch, follow these steps to configure your WSL2 instance to use this switch.
1. **Set WSL2 to Use the External Switch**:
- By default, WSL2 connects to your local network through a NAT-based virtual network. To connect WSL2 directly to your LAN, you need to point it at the external Hyper-V switch you created in Step 1.
2. **Check WSL2 Networking**:
- Inside WSL, run:
```bash
ip a
```
- You should see an IP address in the range of your local network (e.g., `192.168.x.x`).
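In recent WSL releases, the switch assignment is done through the global `.wslconfig` file in your Windows user profile (`%UserProfile%\.wslconfig`). A minimal sketch, assuming the switch name from Step 1 (bridged networking is still marked experimental in some WSL builds, so your version may behave differently):
```ini
# %UserProfile%\.wslconfig
[wsl2]
networkingMode=bridged
vmSwitch=WSL External Switch
```
Run `wsl --shutdown` afterwards so WSL picks up the new networking mode.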
---
## Step 3: Configure a Static IP Address for WSL2
Once WSL2 is connected to the external network, you can assign a static IP address to your WSL2 instance.
1. **Open WSL2** and Edit the Network Configuration:
- Depending on your Linux distribution, the file paths may vary, but typically for Ubuntu-based systems:
```bash
sudo nano /etc/netplan/01-netcfg.yaml
```
- If this file doesn't exist, create a new file or use the correct configuration file path.
2. **Configure Static IP**:
- Add or update the following configuration:
```yaml
network:
version: 2
renderer: networkd
ethernets:
eth0:
dhcp4: no
addresses:
- 192.168.1.100/24 # Choose an IP address in your network range
gateway4: 192.168.1.1 # Your router's IP address
nameservers:
addresses:
- 8.8.8.8
- 8.8.4.4
```
- Adjust the values according to your local network settings:
- `addresses`: This is the static IP you want to assign.
- `gateway4`: This should be the IP address of your router.
- `nameservers`: These are DNS servers; you can use Google's public DNS or any other DNS provider.
3. **Apply the Changes**:
- Run the following command to apply the network configuration:
```bash
sudo netplan apply
```
4. **Verify the Static IP**:
- Check if the static IP is correctly set by running:
```bash
ip a
```
- You should see the static IP you configured (e.g., `192.168.1.100`) on the appropriate network interface (usually `eth0`).
---
## Step 4: Restart WSL2 to Apply Changes
To ensure the changes are fully applied, restart WSL2:
1. Open PowerShell or Command Prompt and run:
```powershell
wsl --shutdown
```
2. Then, start your WSL2 instance again.
## Step 5: Verify Connectivity
1. Check Internet and Local Network Connectivity:
- Run a ping command from within WSL to verify that it can reach the internet:
```bash
ping 8.8.8.8
```
2. Test Access from other Devices:
- If you're running services inside WSL (e.g., a web server), ensure they are accessible from other devices on your local network using the static IP address you configured (e.g., `http://192.168.1.100:4000`).
# Configuring `vm.overcommit_memory` in sysctl for WSL2
To prevent memory overcommitment issues and optimize performance, you can configure the `vm.overcommit_memory` setting in WSL2. This is particularly useful when running Redis or other memory-intensive services inside WSL2, as it helps control how the Linux kernel handles memory allocation.
### 1. **Open the sysctl Configuration File**:
To set the `vm.overcommit_memory` value, you'll need to edit the sysctl configuration file. Inside your WSL2 instance, run the following command to open the `sysctl.conf` file for editing:
```bash
sudo nano /etc/sysctl.conf
```
### 2. Add the Overcommit Memory Setting:
Add the following line at the end of the file to allow memory overcommitment:
```bash
vm.overcommit_memory = 1
```
This setting tells the Linux kernel to always accept memory allocation requests, regardless of how much memory is currently available. Redis recommends it so that background saves (which fork the process) don't fail under low memory; note the OOM killer can still intervene under genuine memory pressure.
### 3. Apply the Changes:
After editing the file, save it and then apply the new sysctl configuration by running:
```bash
sudo sysctl -p
```
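You can confirm the value took effect without rebooting by reading it back from procfs:

```shell
# Prints the current overcommit policy; 1 means the kernel always accepts allocation requests
cat /proc/sys/vm/overcommit_memory
```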
# Install Docker and Docker Compose in WSL2
- https://docs.docker.com/desktop/wsl/
# Docker Commands
## General `docker-compose` Commands:
1. Force a rebuild of all services without using the cache, then bring them up (`docker-compose up` itself has no `--no-cache` flag): `docker-compose build --no-cache && docker-compose up`
2. Start Containers in Detached Mode: This will run the containers in the background (detached mode): `docker-compose up -d`
3. Stop and Remove Containers: Stops and removes the containers gracefully: `docker-compose down`
4. Stop containers without removing them: `docker-compose stop`
5. Remove Containers, Volumes, and Networks: `docker-compose down --volumes`
6. Force rebuild of containers: `docker-compose build --no-cache`
7. View running Containers: `docker-compose ps`
8. View a specific container's logs: `docker-compose logs <container-name>`
9. Scale services (run multiple instances of a service): `docker-compose up --scale <service-name>=<number-of-instances> -d`
## Volume Management Commands
1. List Docker volumes: `docker volume ls`
2. Remove unused volumes: `docker volume prune`
3. Remove specific volumes: `docker volume rm <volume-name>`
4. Inspect a volume: `docker volume inspect <volume-name>`
## Container Image Management Commands:
1. List running containers: `docker ps`
2. List all containers: `docker ps -a`
3. Remove Stopped containers: `docker container prune`
4. Remove a specific container: `docker container rm <container-name>`
5. Remove a specific image: `docker rmi <image-name>:<version>`
6. Remove all unused images: `docker image prune -a`
## Network Management Commands:
1. List networks: `docker network ls`
2. Inspect a specific network: `docker network inspect <network-name>`
3. Remove a specific network: `docker network rm <network-name>`
4. Remove unused networks: `docker network prune`
## Debugging and maintenance:
1. Enter a running container: `docker exec -it <container-name> /bin/bash` (could also be `/bin/sh`, or for example `redis-cli` on a Redis node)
2. View container resource usage: `docker stats`
3. Check Disk space used by Docker: `docker system df`
4. Remove all unused Data (Nuclear option): `docker system prune`
## Specific examples
1. To simulate a clean state, run `docker system prune` followed by `docker volume prune -a`
2. You can run `docker-compose up` without the `-d` option to get the same foreground experience as before, including the ability to press Ctrl+C and bring the entire stack down

111
docker-compose.yml Normal file

@@ -0,0 +1,111 @@
#############################
# Ports Exposed
# 4000 - Imex Node API
# 3333 - SocketIO Admin-UI
# 3334 - Redis-Insights
#############################
services:
redis-node-1:
build:
context: ./redis
container_name: redis-node-1
hostname: redis-node-1
restart: always
networks:
- redis-cluster-net
volumes:
- redis-node-1-data:/data
- redis-lock:/redis-lock
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 10
redis-node-2:
build:
context: ./redis
container_name: redis-node-2
hostname: redis-node-2
restart: always
networks:
- redis-cluster-net
volumes:
- redis-node-2-data:/data
- redis-lock:/redis-lock
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 10
redis-node-3:
build:
context: ./redis
container_name: redis-node-3
hostname: redis-node-3
restart: always
networks:
- redis-cluster-net
volumes:
- redis-node-3-data:/data
- redis-lock:/redis-lock
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 10
node-app:
build:
context: .
container_name: node-app
hostname: imex-api
networks:
- redis-cluster-net
env_file:
- .env.development
depends_on:
redis-node-1:
condition: service_healthy
redis-node-2:
condition: service_healthy
redis-node-3:
condition: service_healthy
ports:
- "4000:4000"
volumes:
- .:/app
- /app/node_modules
socketio-admin-ui:
image: maitrungduc1410/socket.io-admin-ui
container_name: socketio-admin-ui
networks:
- redis-cluster-net
ports:
- "3333:80"
redis-insight:
image: redislabs/redisinsight:latest
container_name: redis-insight
hostname: redis-insight
restart: always
ports:
- "3334:5540"
networks:
- redis-cluster-net
volumes:
- redis-insight-data:/db
networks:
redis-cluster-net:
driver: bridge
volumes:
redis-node-1-data:
redis-node-2-data:
redis-node-3-data:
redis-lock:
redis-insight-data:

948
package-lock.json generated

File diff suppressed because it is too large

View File

@@ -19,6 +19,7 @@
"makeitpretty": "prettier --write \"**/*.{css,js,json,jsx,scss}\""
},
"dependencies": {
"@aws-sdk/client-elasticache": "^3.665.0",
"@aws-sdk/client-secrets-manager": "^3.654.0",
"@aws-sdk/client-ses": "^3.654.0",
"@aws-sdk/credential-provider-node": "^3.654.0",
@@ -46,6 +47,7 @@
"graylog2": "^0.2.1",
"inline-css": "^4.0.2",
"intuit-oauth": "^4.1.2",
"ioredis": "^5.4.1",
"json-2-csv": "^5.5.5",
"lodash": "^4.17.21",
"moment": "^2.30.1",

20
redis/Dockerfile Normal file

@@ -0,0 +1,20 @@
# Use the official Redis image
FROM redis:7.0-alpine
# Copy the Redis configuration file
COPY redis.conf /usr/local/etc/redis/redis.conf
# Copy the entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
# Make the entrypoint script executable
RUN chmod +x /usr/local/bin/entrypoint.sh
# Debugging step: List contents of /usr/local/bin
RUN ls -l /usr/local/bin
# Expose Redis ports
EXPOSE 6379 16379
# Set the entrypoint
ENTRYPOINT ["entrypoint.sh"]

36
redis/entrypoint.sh Normal file

@@ -0,0 +1,36 @@
#!/bin/sh
LOCKFILE="/redis-lock/redis-cluster-init.lock"
# Start Redis server in the background
redis-server /usr/local/etc/redis/redis.conf &
# Wait for Redis server to start
sleep 5
# Only initialize the cluster if the lock file doesn't exist
if [ ! -f "$LOCKFILE" ]; then
echo "Initializing Redis Cluster..."
# Create lock file to prevent further initialization attempts
touch "$LOCKFILE"
if [ $? -eq 0 ]; then
echo "Lock file created successfully at $LOCKFILE."
else
echo "Failed to create lock file."
fi
# Initialize the Redis cluster
yes yes | redis-cli --cluster create \
redis-node-1:6379 \
redis-node-2:6379 \
redis-node-3:6379 \
--cluster-replicas 0
echo "Redis Cluster initialized."
else
echo "Cluster already initialized, skipping initialization."
fi
# Keep the container running
tail -f /dev/null

6
redis/redis.conf Normal file

@@ -0,0 +1,6 @@
bind 0.0.0.0
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

193
server.js

@@ -1,26 +1,32 @@
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const path = require("path");
const http = require("http");
const Redis = require("ioredis");
const express = require("express");
const bodyParser = require("body-parser");
const compression = require("compression");
const cookieParser = require("cookie-parser");
const http = require("http");
const { Server } = require("socket.io");
const { createClient } = require("redis");
const { createAdapter } = require("@socket.io/redis-adapter");
const logger = require("./server/utils/logger");
const { redisSocketEvents } = require("./server/web-sockets/redisSocketEvents");
const { instrument } = require("@socket.io/admin-ui");
const { isString, isEmpty } = require("lodash");
const applyRedisHelpers = require("./server/utils/redisHelpers");
const applyIOHelpers = require("./server/utils/ioHelpers");
const logger = require("./server/utils/logger");
const { applyRedisHelpers } = require("./server/utils/redisHelpers");
const { applyIOHelpers } = require("./server/utils/ioHelpers");
const { redisSocketEvents } = require("./server/web-sockets/redisSocketEvents");
const { ElastiCacheClient, DescribeCacheClustersCommand } = require("@aws-sdk/client-elasticache");
const { default: InstanceManager } = require("./server/utils/instanceMgr");
// Load environment variables
require("dotenv").config({
path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
});
const CLUSTER_RETRY_BASE_DELAY = 100;
const CLUSTER_RETRY_MAX_DELAY = 5000;
const CLUSTER_RETRY_JITTER = 100;
/**
* CORS Origin for Socket.IO
* @type {string[][]}
@@ -49,16 +55,16 @@ const SOCKETIO_CORS_ORIGIN = [
"https://old.imex.online",
"https://www.old.imex.online",
"https://wsadmin.imex.online",
"https://www.wsadmin.imex.online",
"http://localhost:3333",
"https://localhost:3333"
"https://www.wsadmin.imex.online"
];
const SOCKETIO_CORS_ORIGIN_DEV = ["http://localhost:3333", "https://localhost:3333"];
/**
* Middleware for Express app
* @param app
*/
const applyMiddleware = (app) => {
const applyMiddleware = ({ app }) => {
app.use(compression());
app.use(cookieParser());
app.use(bodyParser.json({ limit: "50mb" }));
@@ -76,7 +82,7 @@ const applyMiddleware = (app) => {
* Route groupings for Express app
* @param app
*/
const applyRoutes = (app) => {
const applyRoutes = ({ app }) => {
app.use("/", require("./server/routes/miscellaneousRoutes"));
app.use("/notifications", require("./server/routes/notificationsRoutes"));
app.use("/render", require("./server/routes/renderRoutes"));
@@ -102,37 +108,140 @@ const applyRoutes = (app) => {
});
};
/**
* Fetch Redis nodes from AWS ElastiCache
* @returns {Promise<string[]>}
*/
const getRedisNodesFromAWS = async () => {
const client = new ElastiCacheClient({
region: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
})
});
const params = {
ReplicationGroupId: process.env.REDIS_CLUSTER_ID,
ShowCacheNodeInfo: true
};
try {
// Fetch the cache clusters associated with the replication group
const command = new DescribeCacheClustersCommand(params);
const response = await client.send(command);
const cacheClusters = response.CacheClusters;
return cacheClusters.flatMap((cluster) =>
cluster.CacheNodes.map((node) => `${node.Endpoint.Address}:${node.Endpoint.Port}`)
);
} catch (err) {
logger.log(`Error fetching Redis nodes from AWS: ${err.message}`, "ERROR", "redis", "api");
throw err;
}
};
/**
* Connect to Redis Cluster
* @returns {Promise<unknown>}
*/
const connectToRedisCluster = async () => {
let redisServers;
if (isString(process.env?.REDIS_CLUSTER_ID) && !isEmpty(process.env?.REDIS_CLUSTER_ID)) {
// Fetch Redis nodes from AWS if AWS environment variables are present
redisServers = await getRedisNodesFromAWS();
} else {
// Use the Dockerized Redis cluster in development
if (isEmpty(process.env?.REDIS_URL) || !isString(process.env?.REDIS_URL)) {
logger.log(`[${process.env.NODE_ENV}] No or Malformed REDIS_URL present.`, "ERROR", "redis", "api");
process.exit(1);
}
try {
redisServers = JSON.parse(process.env.REDIS_URL);
} catch (error) {
logger.log(
`[${process.env.NODE_ENV}] Failed to parse REDIS_URL: ${error.message}. Exiting...`,
"ERROR",
"redis",
"api"
);
process.exit(1);
}
}
const clusterRetryStrategy = (times) => {
const delay =
Math.min(CLUSTER_RETRY_BASE_DELAY + times * 50, CLUSTER_RETRY_MAX_DELAY) + Math.random() * CLUSTER_RETRY_JITTER;
logger.log(
`[${process.env.NODE_ENV}] Redis cluster not yet ready. Retrying in ${delay.toFixed(2)}ms`,
"ERROR",
"redis",
"api"
);
return delay;
};
const redisCluster = new Redis.Cluster(redisServers, {
clusterRetryStrategy,
enableAutoPipelining: true,
enableReadyCheck: true,
redisOptions: {
// connectTimeout: 10000, // Timeout for connecting in ms
// idleTimeoutMillis: 30000, // Close idle connections after 30s
// maxRetriesPerRequest: 5 // Retry a maximum of 5 times per request
}
});
return new Promise((resolve, reject) => {
redisCluster.on("ready", () => {
logger.log(`[${process.env.NODE_ENV}] Redis cluster connection established.`, "INFO", "redis", "api");
resolve(redisCluster);
});
redisCluster.on("error", (err) => {
logger.log(`[${process.env.NODE_ENV}] Redis cluster connection failed: ${err.message}`, "ERROR", "redis", "api");
reject(err);
});
});
};
/**
* Apply Redis to the server
* @param server
* @param app
*/
const applySocketIO = async (server, app) => {
// Redis client setup for Pub/Sub and Key-Value Store
const pubClient = createClient({ url: process.env.REDIS_URL || "redis://localhost:6379" });
const applySocketIO = async ({ server, app }) => {
const redisCluster = await connectToRedisCluster();
// Handle errors
redisCluster.on("error", (err) => {
logger.log(`[${process.env.NODE_ENV}] Redis ERROR`, "ERROR", "redis", "api");
});
const pubClient = redisCluster;
const subClient = pubClient.duplicate();
pubClient.on("error", (err) => logger.log(`Redis pubClient error: ${err}`, "ERROR", "redis"));
subClient.on("error", (err) => logger.log(`Redis subClient error: ${err}`, "ERROR", "redis"));
try {
await Promise.all([pubClient.connect(), subClient.connect()]);
logger.log(`[${process.env.NODE_ENV}] Connected to Redis`, "INFO", "redis", "api");
} catch (redisError) {
logger.log("Failed to connect to Redis", "ERROR", "redis", redisError);
}
process.on("SIGINT", async () => {
logger.log("Closing Redis connections...", "INFO", "redis", "api");
await Promise.all([pubClient.disconnect(), subClient.disconnect()]);
process.exit(0);
try {
await Promise.all([pubClient.disconnect(), subClient.disconnect()]);
logger.log("Redis connections closed. Process will exit.", "INFO", "redis", "api");
} catch (error) {
logger.log(`Error closing Redis connections: ${error.message}`, "ERROR", "redis", "api");
}
});
const ioRedis = new Server(server, {
path: "/wss",
adapter: createAdapter(pubClient, subClient),
cors: {
origin: SOCKETIO_CORS_ORIGIN,
origin:
process.env?.NODE_ENV === "development"
? [...SOCKETIO_CORS_ORIGIN, ...SOCKETIO_CORS_ORIGIN_DEV]
: SOCKETIO_CORS_ORIGIN,
methods: ["GET", "POST"],
credentials: true,
exposedHeaders: ["set-cookie"]
@@ -147,6 +256,7 @@ const applySocketIO = async (server, app) => {
username: "admin",
password: process.env.REDIS_ADMIN_PASS
},
mode: process.env.REDIS_ADMIN_MODE || "development"
});
}
@@ -161,19 +271,22 @@ const applySocketIO = async (server, app) => {
}
});
const api = {
pubClient,
io,
ioRedis,
redisCluster
};
app.use((req, res, next) => {
Object.assign(req, {
pubClient,
io,
ioRedis
});
Object.assign(req, api);
next();
});
Object.assign(module.exports, { io, pubClient, ioRedis });
Object.assign(module.exports, api);
return { pubClient, io, ioRedis };
return api;
};
/**
@@ -186,16 +299,16 @@ const main = async () => {
const server = http.createServer(app);
const { pubClient, ioRedis } = await applySocketIO(server, app);
const api = applyRedisHelpers(pubClient, app, logger);
const ioHelpers = applyIOHelpers(app, api, ioRedis, logger);
const { pubClient, ioRedis } = await applySocketIO({ server, app });
const redisHelpers = applyRedisHelpers({ pubClient, app, logger });
const ioHelpers = applyIOHelpers({ app, redisHelpers, ioRedis, logger });
// Legacy Socket Events
require("./server/web-sockets/web-socket");
applyMiddleware(app);
applyRoutes(app);
redisSocketEvents(ioRedis, api, ioHelpers);
applyMiddleware({ app });
applyRoutes({ app });
redisSocketEvents({ io: ioRedis, redisHelpers, ioHelpers, logger });
try {
await server.listen(port);

View File

@@ -18,7 +18,7 @@ const ses = new aws.SES({
// The key apiVersion is no longer supported in v3, and can be removed.
// @deprecated The client uses the "latest" apiVersion.
apiVersion: "latest",
defaultProvider,
credentials: defaultProvider(),
region: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
@@ -96,7 +96,7 @@ const sendServerEmail = async ({ subject, text }) => {
}
};
const sendProManagerWelcomeEmail = async ({to, subject, html}) => {
const sendProManagerWelcomeEmail = async ({ to, subject, html }) => {
try {
await transporter.sendMail({
from: `ProManager <noreply@promanager.web-est.com>`,

View File

@@ -15,7 +15,7 @@ const { taskEmailQueue } = require("./tasksEmailsQueue");
const ses = new aws.SES({
apiVersion: "latest",
defaultProvider,
credentials: defaultProvider(),
region: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
@@ -45,22 +45,20 @@ if (process.env.NODE_ENV !== "development") {
// Handling SIGINT (e.g., Ctrl+C)
process.on("SIGINT", async () => {
await tasksEmailQueueCleanup();
process.exit(0);
});
// Handling SIGTERM (e.g., sent by system shutdown)
process.on("SIGTERM", async () => {
await tasksEmailQueueCleanup();
process.exit(0);
});
// Handling uncaught exceptions
process.on("uncaughtException", async (err) => {
await tasksEmailQueueCleanup();
process.exit(1); // Exit with an 'error' code
throw err;
});
// Handling unhandled promise rejections
process.on("unhandledRejection", async (reason, promise) => {
await tasksEmailQueueCleanup();
process.exit(1); // Exit with an 'error' code
throw reason;
});
}
@@ -247,7 +245,7 @@ const tasksRemindEmail = async (req, res) => {
const fromEmails = InstanceManager({
imex: "ImEX Online <noreply@imex.online>",
rome:
tasksRequest?.tasks[0].bodyshop.convenient_company === "promanager"
tasksRequest?.tasks[0].bodyshop.convenient_company === "promanager"
? "ProManager <noreply@promanager.web-est.com>"
: "Rome Online <noreply@romeonline.io>"
});
@@ -283,7 +281,7 @@ const tasksRemindEmail = async (req, res) => {
const endPoints = InstanceManager({
imex: process.env?.NODE_ENV === "test" ? "https://test.imex.online" : "https://imex.online",
rome:
tasksRequest?.tasks[0].bodyshop.convenient_company === "promanager"
tasksRequest?.tasks[0].bodyshop.convenient_company === "promanager"
? process.env?.NODE_ENV === "test"
? "https://test.promanager.web-est.com"
: "https://promanager.web-est.com"

View File

@@ -1,4 +1,4 @@
const applyIOHelpers = (app, api, io, logger) => {
const applyIOHelpers = ({ app, api, io, logger }) => {
const getBodyshopRoom = (bodyshopID) => `bodyshop-broadcast-room:${bodyshopID}`;
const ioHelpersAPI = {
@@ -14,4 +14,4 @@ const applyIOHelpers = (app, api, io, logger) => {
return ioHelpersAPI;
};
module.exports = applyIOHelpers;
module.exports = { applyIOHelpers };

View File

@@ -1,14 +1,14 @@
const logger = require("./logger");
/**
* Apply Redis helper functions
* @param pubClient
* @param app
* @param logger
*/
const applyRedisHelpers = (pubClient, app, logger) => {
const applyRedisHelpers = ({ pubClient, app, logger }) => {
// Store session data in Redis
const setSessionData = async (socketId, key, value) => {
try {
await pubClient.hSet(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
await pubClient.hset(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
} catch (error) {
logger.log(`Error Setting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -17,7 +17,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve session data from Redis
const getSessionData = async (socketId, key) => {
try {
const data = await pubClient.hGet(`socket:${socketId}`, key);
const data = await pubClient.hget(`socket:${socketId}`, key);
return data ? JSON.parse(data) : null;
} catch (error) {
logger.log(`Error Getting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
@@ -38,7 +38,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
try {
// keyValues is expected to be an object { key1: value1, key2: value2, ... }
const entries = Object.entries(keyValues).map(([key, value]) => [key, JSON.stringify(value)]);
await pubClient.hSet(`socket:${socketId}`, ...entries.flat());
await pubClient.hset(`socket:${socketId}`, ...entries.flat());
} catch (error) {
logger.log(`Error Setting Multiple Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -47,7 +47,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve multiple session data from Redis
const getMultipleSessionData = async (socketId, keys) => {
try {
const data = await pubClient.hmGet(`socket:${socketId}`, keys);
const data = await pubClient.hmget(`socket:${socketId}`, keys);
// Redis returns an object with null values for missing keys, so we parse the non-null ones
return Object.fromEntries(keys.map((key, index) => [key, data[index] ? JSON.parse(data[index]) : null]));
} catch (error) {
@@ -60,7 +60,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Use Redis multi/pipeline to batch the commands
const multi = pubClient.multi();
keyValueArray.forEach(([key, value]) => {
multi.hSet(`socket:${socketId}`, key, JSON.stringify(value));
multi.hset(`socket:${socketId}`, key, JSON.stringify(value));
});
await multi.exec(); // Execute all queued commands
} catch (error) {
@@ -71,7 +71,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the end of the Redis list
const addItemToEndOfList = async (socketId, key, newItem) => {
try {
await pubClient.rPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.rpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the end of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -80,7 +80,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the beginning of the Redis list
const addItemToBeginningOfList = async (socketId, key, newItem) => {
try {
await pubClient.lPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.lpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the beginning of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -98,7 +98,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Add methods to manage room users
const addUserToRoom = async (room, user) => {
try {
await pubClient.sAdd(room, JSON.stringify(user));
await pubClient.sadd(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error adding user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -106,7 +106,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
const removeUserFromRoom = async (room, user) => {
try {
await pubClient.sRem(room, JSON.stringify(user));
await pubClient.srem(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error removing user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -114,7 +114,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
const getUsersInRoom = async (room) => {
try {
const users = await pubClient.sMembers(room);
const users = await pubClient.smembers(room);
return users.map((user) => JSON.parse(user));
} catch (error) {
logger.log(`Error getting users in room ${room}: ${error}`, "ERROR", "redis");
@@ -143,70 +143,87 @@ const applyRedisHelpers = (pubClient, app, logger) => {
next();
});
// // Demo to show how all the helper functions work
// Demo to show how all the helper functions work
// const demoSessionData = async () => {
// const socketId = "testSocketId";
//
// // Store session data using setSessionData
// await exports.setSessionData(socketId, "field1", "Hello, Redis!");
//
// // Retrieve session data using getSessionData
// const field1Value = await exports.getSessionData(socketId, "field1");
// // 1. Test setSessionData and getSessionData
// await setSessionData(socketId, "field1", "Hello, Redis!");
// const field1Value = await getSessionData(socketId, "field1");
// console.log("Retrieved single field value:", field1Value);
//
// // Store multiple session data using setMultipleSessionData
// await exports.setMultipleSessionData(socketId, { field2: "Second Value", field3: "Third Value" });
//
// // Retrieve multiple session data using getMultipleSessionData
// const multipleFields = await exports.getMultipleSessionData(socketId, ["field2", "field3"]);
// // 2. Test setMultipleSessionData and getMultipleSessionData
// await setMultipleSessionData(socketId, { field2: "Second Value", field3: "Third Value" });
// const multipleFields = await getMultipleSessionData(socketId, ["field2", "field3"]);
// console.log("Retrieved multiple field values:", multipleFields);
//
// // Store multiple session data using setMultipleFromArraySessionData
// await exports.setMultipleFromArraySessionData(socketId, [
// // 3. Test setMultipleFromArraySessionData
// await setMultipleFromArraySessionData(socketId, [
// ["field4", "Fourth Value"],
// ["field5", "Fifth Value"]
// ]);
//
// // Retrieve and log all fields
// const allFields = await exports.getMultipleSessionData(socketId, [
// "field1",
// "field2",
// "field3",
// "field4",
// "field5"
// ]);
// const allFields = await getMultipleSessionData(socketId, ["field1", "field2", "field3", "field4", "field5"]);
// console.log("Retrieved all field values:", allFields);
//
// // 4. Test list functions
// // Add item to the end of a Redis list
// await exports.addItemToEndOfList(socketId, "logEvents", { event: "Log Event 1", timestamp: new Date() });
// await exports.addItemToEndOfList(socketId, "logEvents", { event: "Log Event 2", timestamp: new Date() });
// await addItemToEndOfList(socketId, "logEvents", { event: "Log Event 1", timestamp: new Date() });
// await addItemToEndOfList(socketId, "logEvents", { event: "Log Event 2", timestamp: new Date() });
//
// // Add item to the beginning of a Redis list
// await exports.addItemToBeginningOfList(socketId, "logEvents", { event: "First Log Event", timestamp: new Date() });
// await addItemToBeginningOfList(socketId, "logEvents", { event: "First Log Event", timestamp: new Date() });
//
// // Retrieve the entire list (using lRange)
// const logEvents = await pubClient.lRange(`socket:${socketId}:logEvents`, 0, -1);
// console.log("Log Events List:", logEvents.map(JSON.parse));
// // Retrieve the entire list
// const logEventsData = await pubClient.lrange(`socket:${socketId}:logEvents`, 0, -1);
// const logEvents = logEventsData.map((item) => JSON.parse(item));
// console.log("Log Events List:", logEvents);
//
// // 5. Test clearList
// await clearList(socketId, "logEvents");
// console.log("Log Events List cleared.");
//
// // Retrieve the list after clearing to confirm it's empty
// const logEventsAfterClear = await pubClient.lrange(`socket:${socketId}:logEvents`, 0, -1);
// console.log("Log Events List after clearing:", logEventsAfterClear); // Should be an empty array
//
// // 6. Test clearSessionData
// await clearSessionData(socketId);
// console.log("Session data cleared.");
//
// // 7. Test room functions
// const roomName = "testRoom";
// const user1 = { id: 1, name: "Alice" };
// const user2 = { id: 2, name: "Bob" };
//
// // Add users to room
// await addUserToRoom(roomName, user1);
// await addUserToRoom(roomName, user2);
//
// // Get users in room
// const usersInRoom = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName}:`, usersInRoom);
//
// // Remove a user from room
// await removeUserFromRoom(roomName, user1);
//
// // Get users in room after removal
// const usersInRoomAfterRemoval = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName} after removal:`, usersInRoomAfterRemoval);
//
// // Clean up: remove remaining users from room
// await removeUserFromRoom(roomName, user2);
//
// // Verify room is empty
// const usersInRoomAfterCleanup = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName} after cleanup:`, usersInRoomAfterCleanup); // Should be empty
// };
// if (process.env.NODE_ENV === "development") {
// demoSessionData();
// }
// "th1s1sr3d1s" (BCrypt)
return api;
};
module.exports = { applyRedisHelpers };
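The commented-out walkthrough above exercises the session helpers end to end. As a rough sketch of the semantics those helpers provide — an in-memory stand-in for illustration only, not the real Redis-hash-backed implementation (`store`, and these simplified signatures, are hypothetical):

```javascript
// Hypothetical in-memory stand-in for the Redis-backed session helpers.
// The real implementations store JSON-serialized fields in a Redis hash
// keyed by `socket:<socketId>`; a Map of plain objects mimics that here.
const store = new Map();

const setSessionData = async (socketId, field, value) => {
  const key = `socket:${socketId}`;
  if (!store.has(key)) store.set(key, {});
  store.get(key)[field] = JSON.stringify(value);
};

const getSessionData = async (socketId, field) => {
  const hash = store.get(`socket:${socketId}`);
  return hash && field in hash ? JSON.parse(hash[field]) : null;
};

const clearSessionData = async (socketId) => {
  store.delete(`socket:${socketId}`);
};

(async () => {
  await setSessionData("abc", "field1", "First Value");
  console.log(await getSessionData("abc", "field1")); // "First Value"
  await clearSessionData("abc");
  console.log(await getSessionData("abc", "field1")); // null
})();
```

The multi-field and list variants exercised in the demo follow the same shape: one `socket:<id>`-prefixed key per socket, JSON-serialized values per field.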

View File

@@ -1,102 +1,18 @@
const path = require("path");
require("dotenv").config({
path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
});
const logger = require("../utils/logger");
const { admin } = require("../firebase/firebase-handler");
const redisSocketEvents = ({
io,
redisHelpers: { setSessionData, clearSessionData }, // Note: Used if we persist user to Redis
ioHelpers: { getBodyshopRoom },
logger
}) => {
// Logging helper functions
const createLogEvent = (socket, level, message) => {
console.log(`[IOREDIS LOG EVENT] - ${socket?.user?.email} - ${socket.id} - ${message}`);
logger.log("ioredis-log-event", level, socket?.user?.email, null, { wsmessage: message });
};
// Socket Auth Middleware
const authMiddleware = (socket, next) => {
try {
if (socket.handshake.auth.token) {
@@ -105,6 +21,9 @@ const redisSocketEvents = (io, { addUserToRoom, getUsersInRoom, removeUserFromRo
.verifyIdToken(socket.handshake.auth.token)
.then((user) => {
socket.user = user;
// Note: if we ever want to capture user data across sockets
// Uncomment the following line and move the next() call into a second then()
// return setSessionData(socket.id, "user", user);
next();
})
.catch((error) => {
@@ -122,7 +41,99 @@ const redisSocketEvents = (io, { addUserToRoom, getUsersInRoom, removeUserFromRo
}
};
// Register Socket Events
const registerSocketEvents = (socket) => {
createLogEvent(socket, "DEBUG", `Registering RedisIO Socket Events.`);
// Token Update Events
const registerUpdateEvents = (socket) => {
const updateToken = async (newToken) => {
try {
// noinspection UnnecessaryLocalVariableJS
const user = await admin.auth().verifyIdToken(newToken, true);
socket.user = user;
// If we ever want to persist user data across workers
// await setSessionData(socket.id, "user", user);
createLogEvent(socket, "INFO", "Token updated successfully");
socket.emit("token-updated", { success: true });
} catch (error) {
if (error.code === "auth/id-token-expired") {
createLogEvent(socket, "WARNING", "Stale token received, waiting for new token");
socket.emit("token-updated", {
success: false,
error: "Stale token."
});
} else {
createLogEvent(socket, "ERROR", `Token update failed: ${error.message}`);
socket.emit("token-updated", { success: false, error: error.message });
// For any other errors, optionally disconnect the socket
socket.disconnect();
}
}
};
socket.on("update-token", updateToken);
};
// Room Broadcast Events
const registerRoomAndBroadcastEvents = (socket) => {
const joinBodyshopRoom = (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.join(room);
createLogEvent(socket, "DEBUG", `Client joined bodyshop room: ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error joining room: ${error}`);
}
};
const leaveBodyshopRoom = (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.leave(room);
createLogEvent(socket, "DEBUG", `Client left bodyshop room: ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error leaving room: ${error}`);
}
};
const broadcastToBodyshopRoom = (bodyshopUUID, message) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
io.to(room).emit("bodyshop-message", message);
createLogEvent(socket, "DEBUG", `Broadcast message to bodyshop ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error broadcasting to bodyshop room: ${error}`);
}
};
socket.on("join-bodyshop-room", joinBodyshopRoom);
socket.on("leave-bodyshop-room", leaveBodyshopRoom);
socket.on("broadcast-to-bodyshop", broadcastToBodyshopRoom);
};
// Disconnect Events
const registerDisconnectEvents = (socket) => {
const disconnect = () => {
createLogEvent(socket, "DEBUG", `User disconnected.`);
const rooms = Array.from(socket.rooms).filter((room) => room !== socket.id);
for (const room of rooms) {
socket.leave(room);
}
// If we ever want to persist the user across workers
// clearSessionData(socket.id);
};
socket.on("disconnect", disconnect);
};
// Call Handlers
registerRoomAndBroadcastEvents(socket);
registerUpdateEvents(socket);
registerDisconnectEvents(socket);
};
// Associate Middleware and Handlers
io.use(authMiddleware);
io.on("connection", registerSocketEvents);
};
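The handlers above lean on two small pieces of logic worth spelling out. A minimal sketch, under the assumption that rooms are namespaced by bodyshop UUID — the real `getBodyshopRoom` is injected via `ioHelpers` and may differ, so this implementation is hypothetical:

```javascript
// Hypothetical sketch of the room-name helper: namespace Socket.IO room
// names by bodyshop UUID so they cannot collide with other room keys.
const getBodyshopRoom = (bodyshopUUID) => `bodyshop:${bodyshopUUID}`;

// Mirrors the disconnect handler's cleanup: leave every room except the
// socket's own default room (Socket.IO always places each socket in a
// room named after its own id).
const roomsToLeave = (socketId, rooms) =>
  Array.from(rooms).filter((room) => room !== socketId);

console.log(getBodyshopRoom("42e1")); // "bodyshop:42e1"
console.log(roomsToLeave("sock1", new Set(["sock1", getBodyshopRoom("42e1")])));
```

Keeping this derivation in one injected helper means the server and any broadcast callers agree on the room key without duplicating the format string.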