Merged in release/2024-10-11 (pull request #1811)
Release/2024-10-11 into master-AIO - IO-2791, IO-2962, IO-2971, IO-2972, IO-2979
24
.dockerignore
Normal file
@@ -0,0 +1,24 @@
```
# Directories to exclude
.circleci
.idea
.platform
.vscode
_reference
client
redis/dockerdata
hasura
node_modules
# Files to exclude
.ebignore
.editorconfig
.eslintrc.json
.gitignore
.prettierrc.js
Dockerfile
README.MD
bodyshop_translations.babel
docker-compose.yml
ecosystem.config.js

# Optional: Exclude logs and temporary files
*.log
```
47
Dockerfile
Normal file
@@ -0,0 +1,47 @@
```dockerfile
# Use Amazon Linux 2023 as the base image
FROM amazonlinux:2023

# Install Git and Node.js (Amazon Linux 2023 uses the DNF package manager)
RUN dnf install -y git \
    && curl -sL https://rpm.nodesource.com/setup_20.x | bash - \
    && dnf install -y nodejs \
    && dnf clean all

# Install dependencies required by node-canvas
RUN dnf install -y \
    gcc \
    gcc-c++ \
    cairo-devel \
    pango-devel \
    libjpeg-turbo-devel \
    giflib-devel \
    libpng-devel \
    make \
    python3 \
    python3-pip \
    && dnf clean all

# Set the working directory
WORKDIR /app

# This is needed because our test route uses a git commit hash
RUN git config --global --add safe.directory /app

# Copy package.json
COPY package.json ./

# Install Nodemon
RUN npm install -g nodemon

# Install dependencies
RUN npm i --no-package-lock

# Copy the rest of your application code
COPY . .

# Expose the port your app runs on (adjust if necessary)
EXPOSE 4000

# Start the application
CMD ["nodemon", "--legacy-watch", "server.js"]
```
187
_reference/dockerreadme.md
Normal file
@@ -0,0 +1,187 @@
# Setting up External Networking and a Static IP for WSL2 using Hyper-V

This guide walks you through configuring your WSL2 (Windows Subsystem for Linux) instance to use an external Hyper-V virtual switch, enabling it to connect directly to your local network. You'll also learn how to assign a static IP address to your WSL2 instance.

## Prerequisites

1. **Windows 10/11** with **WSL2** installed.
2. **Hyper-V** enabled on your system. If not, follow these steps to enable it:
   - Open PowerShell as Administrator and run:
     ```powershell
     dism.exe /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
     ```
   - Restart your computer.
3. A basic understanding of networking and WSL2 configuration.

---

## Step 1: Create an External Hyper-V Switch

1. **Open Hyper-V Manager**:
   - Press `Windows Key + X`, select `Hyper-V Manager`.
2. **Create a Virtual Switch**:
   - In the right-hand pane, click `Virtual Switch Manager`.
   - Choose `External` and click `Create Virtual Switch`.
   - Select your external network adapter (usually your Ethernet or Wi-Fi adapter).
   - Give the switch a name (e.g., `WSL External Switch`), then click `Apply` and `OK`.

---

## Step 2: Configure WSL2 to Use the External Hyper-V Switch

Now that you've created the external virtual switch, configure your WSL2 instance to use it.

1. **Set WSL2 to Use the External Switch**:
   - By default, WSL2 uses NAT to connect to your local network. You need to configure WSL2 to use the external Hyper-V switch instead.
2. **Check WSL2 Networking**:
   - Inside WSL, run:
     ```bash
     ip a
     ```
   - You should see an IP address in the range of your local network (e.g., `192.168.x.x`).

---

## Step 3: Configure a Static IP Address for WSL2

Once WSL2 is connected to the external network, you can assign a static IP address to your WSL2 instance.

1. **Open WSL2 and Edit the Network Configuration**:
   - File paths vary by Linux distribution, but typically for Ubuntu-based systems:
     ```bash
     sudo nano /etc/netplan/01-netcfg.yaml
     ```
   - If this file doesn't exist, create a new file or use the correct configuration file path.
2. **Configure the Static IP**:
   - Add or update the following configuration:
     ```yaml
     network:
       version: 2
       renderer: networkd
       ethernets:
         eth0:
           dhcp4: no
           addresses:
             - 192.168.1.100/24 # Choose an IP address in your network range
           gateway4: 192.168.1.1 # Your router's IP address
           nameservers:
             addresses:
               - 8.8.8.8
               - 8.8.4.4
     ```
   - Adjust the values to match your local network:
     - `addresses`: the static IP you want to assign.
     - `gateway4`: the IP address of your router.
     - `nameservers`: DNS servers; you can use Google's public DNS or any other DNS provider.
3. **Apply the Changes**:
   - Run the following command to apply the network configuration:
     ```bash
     sudo netplan apply
     ```
4. **Verify the Static IP**:
   - Check that the static IP is set correctly by running:
     ```bash
     ip a
     ```
   - You should see the static IP you configured (e.g., `192.168.1.100`) on the appropriate network interface (usually `eth0`).

---
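
Before running `netplan apply`, it can help to sanity-check the values you chose. Here is a minimal Python sketch (the function name and checks are illustrative, not part of this setup) that verifies the static address sits inside the gateway's subnet and doesn't collide with reserved addresses:

```python
import ipaddress

def check_static_ip(address_cidr, gateway):
    """Validate a planned netplan static-IP assignment.

    address_cidr: e.g. "192.168.1.100/24" (the `addresses` entry)
    gateway:      e.g. "192.168.1.1"      (the `gateway4` entry)
    Returns a list of problems; an empty list means the values look consistent.
    """
    problems = []
    iface = ipaddress.ip_interface(address_cidr)
    gw = ipaddress.ip_address(gateway)
    if gw not in iface.network:
        problems.append("gateway is outside the interface subnet")
    if gw == iface.ip:
        problems.append("address collides with the gateway")
    if iface.ip in (iface.network.network_address, iface.network.broadcast_address):
        problems.append("address is the network or broadcast address")
    return problems
```

For example, `check_static_ip("192.168.1.100/24", "192.168.1.1")` returns no problems, while pointing `gateway4` at a router outside the subnet gets flagged.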

## Step 4: Restart WSL2 to Apply Changes

To ensure the changes are fully applied, restart WSL2:

1. Open PowerShell or Command Prompt and run:
   ```powershell
   wsl --shutdown
   ```
2. Then, start your WSL2 instance again.

## Step 5: Verify Connectivity

1. Check Internet and Local Network Connectivity:
   - Run a ping command from within WSL to verify that it can reach the internet: `ping 8.8.8.8`
2. Test Access from Other Devices:
   - If you're running services inside WSL (e.g., a web server), ensure they are accessible from other devices on your local network using the static IP address you configured (e.g., `http://192.168.1.100:4000`).

# Configuring `vm.overcommit_memory` in sysctl for WSL2

To prevent memory overcommitment issues and optimize performance, you can configure the `vm.overcommit_memory` setting in WSL2. This is particularly useful when running Redis or other memory-intensive services inside WSL2, as it helps control how the Linux kernel handles memory allocation.

### 1. Open the sysctl Configuration File:

To set the `vm.overcommit_memory` value, you'll need to edit the sysctl configuration file. Inside your WSL2 instance, run the following command to open the `sysctl.conf` file for editing:

```bash
sudo nano /etc/sysctl.conf
```

### 2. Add the Overcommit Memory Setting:

Add the following line at the end of the file to allow memory overcommitment:

```bash
vm.overcommit_memory = 1
```

This setting tells the Linux kernel to always allow memory allocation, regardless of how much memory is available, which can prevent out-of-memory errors when running certain applications.

### 3. Apply the Changes:

After editing the file, save it and then apply the new sysctl configuration by running:

```bash
sudo sysctl -p
```
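
If you want to confirm the setting made it into the file, `sysctl.conf` is a simple `key = value` format with `#`/`;` comments. A small illustrative Python parser (not part of this setup) shows the format:

```python
def parse_sysctl_conf(text):
    """Parse sysctl.conf-style text ("key = value" lines, '#'/';' comments)
    into a dict, keeping the last value when a key repeats."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```

Running it over your edited file should show `vm.overcommit_memory` mapped to `1`.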

# Install Docker and Docker Compose in WSL2

- https://docs.docker.com/desktop/wsl/

# LocalStack

- LocalStack front end (Optional) - https://apps.microsoft.com/detail/9ntrnft9zws2?hl=en-us&gl=US
- http://localhost:4566/_aws/ses will allow you to see the emails that were sent
# Docker Commands

## General `docker-compose` Commands:

1. Bring up the services, force a rebuild of all services, and do not use the cache: `docker-compose up --build --no-cache`
2. Start containers in detached mode (runs the containers in the background): `docker-compose up -d`
3. Stop and remove containers gracefully: `docker-compose down`
4. Stop containers without removing them: `docker-compose stop`
5. Remove containers, volumes, and networks: `docker-compose down --volumes`
6. Force a rebuild of containers: `docker-compose build --no-cache`
7. View running containers: `docker-compose ps`
8. View a specific container's logs: `docker-compose logs <container-name>`
9. Scale services (multiple instances of a service): `docker-compose up --scale <service-name>=<number-of-instances> -d`
10. Watch a specific container's logs in real time with timestamps: `docker-compose logs -f --timestamps <container-name>`

## Volume Management Commands

1. List Docker volumes: `docker volume ls`
2. Remove unused volumes: `docker volume prune`
3. Remove specific volumes: `docker volume rm <volume-name>`
4. Inspect a volume: `docker volume inspect <volume-name>`

## Container and Image Management Commands:

1. List running containers: `docker ps`
2. List all containers: `docker ps -a`
3. Remove stopped containers: `docker container prune`
4. Remove a specific container: `docker container rm <container-name>`
5. Remove a specific image: `docker rmi <image-name>:<version>`
6. Remove all unused images: `docker image prune -a`

## Network Management Commands:

1. List networks: `docker network ls`
2. Inspect a specific network: `docker network inspect <network-name>`
3. Remove a specific network: `docker network rm <network-name>`
4. Remove unused networks: `docker network prune`

## Debugging and Maintenance:

1. Enter a running container: `docker exec -it <container-name> /bin/bash` (could also be `/bin/sh`, or for example `redis-cli` on a Redis node)
2. View container resource usage: `docker stats`
3. Check disk space used by Docker: `docker system df`
4. Remove all unused data (nuclear option): `docker system prune`

## Specific Examples

1. To simulate a clean state, run `docker system prune` followed by `docker volume prune -a`.
2. You can run `docker-compose up` without the `-d` option to get the experience you're used to, including being able to Ctrl-C and bring the entire stack down.
```diff
@@ -219,7 +219,7 @@ export function JobsExportAllButton({
   };

   return (
-    <Button onClick={handleQbxml} loading={loading} disabled={disabled}>
+    <Button onClick={handleQbxml} loading={loading} disabled={disabled || jobIds?.length > 10}>
       {t("jobs.actions.exportselected")}
     </Button>
   );
```
```diff
@@ -200,7 +200,7 @@ export function PayableExportAll({
   );

   return (
-    <Button onClick={handleQbxml} loading={loading} disabled={disabled}>
+    <Button onClick={handleQbxml} loading={loading} disabled={disabled || billids?.length > 10}>
       {t("jobs.actions.exportselected")}
     </Button>
   );
```
```diff
@@ -180,7 +180,7 @@ export function PaymentsExportAllButton({
   };

   return (
-    <Button onClick={handleQbxml} loading={loading} disabled={disabled}>
+    <Button onClick={handleQbxml} loading={loading} disabled={disabled || paymentIds?.length > 10}>
       {t("jobs.actions.exportselected")}
     </Button>
   );
```
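
The three export-button hunks all add the same guard: the button also disables itself when more than 10 items are selected. A quick Python model of the `disabled || ids?.length > 10` expression (illustrative only), including the optional-chaining case where the list is missing:

```python
def export_button_disabled(disabled, ids):
    """Mirror `disabled || ids?.length > 10` from the JSX diffs above:
    when `ids` is missing, `ids?.length > 10` is falsy in JavaScript,
    so only the original `disabled` flag matters."""
    return disabled or (ids is not None and len(ids) > 10)
```

So a selection of 11 jobs disables the button even when `disabled` itself is false.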
```diff
@@ -314,8 +314,8 @@ export function* SetAuthLevelFromShopDetails({ payload }) {
   try {
     const userEmail = yield select((state) => state.user.currentUser.email);
     try {
-      //console.log("Setting shop timezone.");
-      // dayjs.tz.setDefault(payload.timezone);
+      console.log("Setting shop timezone.");
+      day.tz.setDefault(payload.timezone);
     } catch (error) {
       console.log(error);
     }
```
178
docker-compose.yml
Normal file
@@ -0,0 +1,178 @@
```yaml
#############################
# Ports Exposed
# 4000 - Imex Node API
# 4566 - LocalStack (Local AWS)
# 3333 - SocketIO Admin-UI (Optional)
# 3334 - Redis-Insights (Optional)
#############################

services:

  # Redis Node 1
  redis-node-1:
    build:
      context: ./redis
    container_name: redis-node-1
    hostname: redis-node-1
    restart: unless-stopped
    networks:
      - redis-cluster-net
    volumes:
      - redis-node-1-data:/data
      - redis-lock:/redis-lock
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 10

  # Redis Node 2
  redis-node-2:
    build:
      context: ./redis
    container_name: redis-node-2
    hostname: redis-node-2
    restart: unless-stopped
    networks:
      - redis-cluster-net
    volumes:
      - redis-node-2-data:/data
      - redis-lock:/redis-lock
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 10

  # Redis Node 3
  redis-node-3:
    build:
      context: ./redis
    container_name: redis-node-3
    hostname: redis-node-3
    restart: unless-stopped
    networks:
      - redis-cluster-net
    volumes:
      - redis-node-3-data:/data
      - redis-lock:/redis-lock
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 10

  # LocalStack: Used to emulate AWS services locally, currently set up for SES
  # Notes: Set the ENV DEBUG to 1 for additional logging
  localstack:
    image: localstack/localstack
    container_name: localstack
    hostname: localstack
    networks:
      - redis-cluster-net
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - SERVICES=ses
      - DEBUG=0
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=ca-central-1
      - EXTRA_CORS_ALLOWED_HEADERS=Authorization,Content-Type
      - EXTRA_CORS_ALLOWED_ORIGINS=*
      - EXTRA_CORS_EXPOSE_HEADERS=Authorization,Content-Type
    ports:
      - "4566:4566"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4566/_localstack/health"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

  # AWS-CLI - Used in conjunction with LocalStack to set the permissions required to send emails
  aws-cli:
    image: amazon/aws-cli
    container_name: aws-cli
    hostname: aws-cli
    networks:
      - redis-cluster-net
    depends_on:
      localstack:
        condition: service_healthy
    environment:
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=ca-central-1
    entrypoint: /bin/sh -c
    command: >
      "
      aws --endpoint-url=http://localstack:4566 ses verify-domain-identity --domain imex.online --region ca-central-1
      aws --endpoint-url=http://localstack:4566 ses verify-email-identity --email-address noreply@imex.online --region ca-central-1
      "

  # Node App: The Main IMEX API
  node-app:
    build:
      context: .
    container_name: node-app
    hostname: imex-api
    networks:
      - redis-cluster-net
    env_file:
      - .env.development
    depends_on:
      redis-node-1:
        condition: service_healthy
      redis-node-2:
        condition: service_healthy
      redis-node-3:
        condition: service_healthy
      localstack:
        condition: service_healthy
      aws-cli:
        condition: service_completed_successfully
    ports:
      - "4000:4000"
    develop:
      watch:
        # rebuild image and recreate service
        - path: package.json
          action: rebuild
    volumes:
      - .:/app
      - node-app-npm-cache:/app/node_modules

  # ## Optional Container to Observe SocketIO data
  # socketio-admin-ui:
  #   image: maitrungduc1410/socket.io-admin-ui
  #   container_name: socketio-admin-ui
  #   networks:
  #     - redis-cluster-net
  #   ports:
  #     - "3333:80"

  # ## Optional Container to Observe Redis Cluster Data
  # redis-insight:
  #   image: redislabs/redisinsight:latest
  #   container_name: redis-insight
  #   hostname: redis-insight
  #   restart: unless-stopped
  #   ports:
  #     - "3334:5540"
  #   networks:
  #     - redis-cluster-net
  #   volumes:
  #     - redis-insight-data:/db

networks:
  redis-cluster-net:
    driver: bridge

volumes:
  node-app-npm-cache:
  redis-node-1-data:
  redis-node-2-data:
  redis-node-3-data:
  redis-lock:
  redis-insight-data:
```
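
The compose file gates `node-app` on the other services becoming healthy. A rough upper bound on how long Compose can wait before declaring a service unhealthy is the start period plus one probe (up to `timeout`) and one `interval` gap per allowed retry. A small Python sketch of that arithmetic (a simplified model; the engine's actual scheduling differs):

```python
def worst_case_healthcheck_window(interval_s, timeout_s, retries, start_period_s=0):
    """Rough upper bound (seconds) before a service can be marked unhealthy:
    the start period, plus one probe attempt (up to timeout_s) and one
    interval_s gap for each allowed retry. Simplified model only."""
    return start_period_s + retries * (interval_s + timeout_s)
```

With the localstack values above (`interval: 10s`, `timeout: 5s`, `retries: 5`, `start_period: 20s`) that comes to roughly 95 seconds; the Redis nodes (`retries: 10`, no start period) work out to about 150 seconds.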
35
nodemon.json
Normal file
@@ -0,0 +1,35 @@
```json
{
  "watch": [
    "server.js",
    "server"
  ],
  "ignore": [
    "client",
    ".circleci",
    ".platform",
    ".idea",
    ".vscode",
    "_reference",
    "firebase",
    "hasura",
    "logs",
    "redis",
    ".dockerignore",
    ".ebignore",
    ".editorconfig",
    ".env.development",
    ".env.development.rome",
    ".eslintrc.json",
    ".gitignore",
    ".npmmrc",
    ".prettierrc.js",
    "bodyshop_translations.babel",
    "Dockerfile",
    "ecosystem.config.js",
    "job-totals-testing-util.js",
    "nodemon.json",
    "package.json",
    "package-lock.json",
    "setadmin.js"
  ]
}
```
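
nodemon restarts the server only for changes under `watch` entries that aren't covered by `ignore`. A very rough Python model of that decision (illustrative; real nodemon uses glob matching, this only checks path prefixes):

```python
def should_restart(changed_path, watch, ignore):
    """Restart when the changed path falls under a watched entry and
    under no ignored entry. Prefix matching only, as a sketch of the
    glob-based behaviour nodemon actually implements."""
    def under(path, entry):
        return path == entry or path.startswith(entry.rstrip("/") + "/")
    watched = any(under(changed_path, w) for w in watch)
    ignored = any(under(changed_path, i) for i in ignore)
    return watched and not ignored
```

So edits under `server/` trigger a restart, while edits under `client/` (handled by its own tooling) do not.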
948
package-lock.json
generated
File diff suppressed because it is too large
```diff
@@ -19,6 +19,7 @@
     "makeitpretty": "prettier --write \"**/*.{css,js,json,jsx,scss}\""
   },
   "dependencies": {
+    "@aws-sdk/client-elasticache": "^3.665.0",
     "@aws-sdk/client-secrets-manager": "^3.654.0",
     "@aws-sdk/client-ses": "^3.654.0",
     "@aws-sdk/credential-provider-node": "^3.654.0",
@@ -46,6 +47,7 @@
     "graylog2": "^0.2.1",
     "inline-css": "^4.0.2",
     "intuit-oauth": "^4.1.2",
+    "ioredis": "^5.4.1",
     "json-2-csv": "^5.5.5",
     "lodash": "^4.17.21",
     "moment": "^2.30.1",
```
20
redis/Dockerfile
Normal file
@@ -0,0 +1,20 @@
```dockerfile
# Use the official Redis image
FROM redis:7.0-alpine

# Copy the Redis configuration file
COPY redis.conf /usr/local/etc/redis/redis.conf

# Copy the entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

# Make the entrypoint script executable
RUN chmod +x /usr/local/bin/entrypoint.sh

# Debugging step: List contents of /usr/local/bin
RUN ls -l /usr/local/bin

# Expose Redis ports
EXPOSE 6379 16379

# Set the entrypoint
ENTRYPOINT ["entrypoint.sh"]
```
36
redis/entrypoint.sh
Normal file
@@ -0,0 +1,36 @@
```sh
#!/bin/sh

LOCKFILE="/redis-lock/redis-cluster-init.lock"

# Start Redis server in the background
redis-server /usr/local/etc/redis/redis.conf &

# Wait for Redis server to start
sleep 5

# Only initialize the cluster if the lock file doesn't exist
if [ ! -f "$LOCKFILE" ]; then
  echo "Initializing Redis Cluster..."

  # Create lock file to prevent further initialization attempts
  touch "$LOCKFILE"
  if [ $? -eq 0 ]; then
    echo "Lock file created successfully at $LOCKFILE."
  else
    echo "Failed to create lock file."
  fi

  # Initialize the Redis cluster
  yes yes | redis-cli --cluster create \
    redis-node-1:6379 \
    redis-node-2:6379 \
    redis-node-3:6379 \
    --cluster-replicas 0

  echo "Redis Cluster initialized."
else
  echo "Cluster already initialized, skipping initialization."
fi

# Keep the container running
tail -f /dev/null
```
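
One caveat with the entrypoint: the check-then-`touch` sequence is not atomic, so if several nodes boot at once, two could pass the `[ ! -f "$LOCKFILE" ]` test before either creates the file. A minimal Python sketch (illustrative, not part of the repo) of the atomic alternative, where `O_CREAT | O_EXCL` makes create-if-absent a single step so exactly one caller wins:

```python
import os

def try_acquire_init_lock(lockfile):
    """Atomically create `lockfile`; return True only for the single
    caller that wins the race. O_CREAT | O_EXCL fails with
    FileExistsError for everyone else, unlike a separate existence
    test followed by `touch`."""
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.close(fd)
    return True
```

In shell the same idea can be had with `mkdir "$LOCKDIR"`, which also fails atomically when the directory already exists. In practice the shared `redis-lock` volume plus staggered container startup makes the race unlikely here, but it's worth knowing.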
6
redis/redis.conf
Normal file
@@ -0,0 +1,6 @@
```
bind 0.0.0.0
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```
189
server.js
```diff
@@ -1,26 +1,32 @@
+const express = require("express");
+const cors = require("cors");
+const bodyParser = require("body-parser");
+const path = require("path");
+const http = require("http");
+const Redis = require("ioredis");
-const express = require("express");
-const bodyParser = require("body-parser");
 const compression = require("compression");
 const cookieParser = require("cookie-parser");
-const http = require("http");
 const { Server } = require("socket.io");
-const { createClient } = require("redis");
 const { createAdapter } = require("@socket.io/redis-adapter");
-const logger = require("./server/utils/logger");
-const { redisSocketEvents } = require("./server/web-sockets/redisSocketEvents");
-const { instrument } = require("@socket.io/admin-ui");

 const { isString, isEmpty } = require("lodash");
-const applyRedisHelpers = require("./server/utils/redisHelpers");
-const applyIOHelpers = require("./server/utils/ioHelpers");

+const logger = require("./server/utils/logger");
+const { applyRedisHelpers } = require("./server/utils/redisHelpers");
+const { applyIOHelpers } = require("./server/utils/ioHelpers");
+const { redisSocketEvents } = require("./server/web-sockets/redisSocketEvents");
+const { ElastiCacheClient, DescribeCacheClustersCommand } = require("@aws-sdk/client-elasticache");
+const { default: InstanceManager } = require("./server/utils/instanceMgr");

 // Load environment variables
 require("dotenv").config({
   path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
 });

+const CLUSTER_RETRY_BASE_DELAY = 100;
+const CLUSTER_RETRY_MAX_DELAY = 5000;
+const CLUSTER_RETRY_JITTER = 100;

 /**
  * CORS Origin for Socket.IO
  * @type {string[][]}
@@ -49,16 +55,16 @@ const SOCKETIO_CORS_ORIGIN = [
   "https://old.imex.online",
   "https://www.old.imex.online",
   "https://wsadmin.imex.online",
-  "https://www.wsadmin.imex.online",
-  "http://localhost:3333",
-  "https://localhost:3333"
+  "https://www.wsadmin.imex.online"
 ];

+const SOCKETIO_CORS_ORIGIN_DEV = ["http://localhost:3333", "https://localhost:3333"];

 /**
  * Middleware for Express app
  * @param app
  */
-const applyMiddleware = (app) => {
+const applyMiddleware = ({ app }) => {
   app.use(compression());
   app.use(cookieParser());
   app.use(bodyParser.json({ limit: "50mb" }));
@@ -76,7 +82,7 @@ const applyMiddleware = (app) => {
  * Route groupings for Express app
  * @param app
  */
-const applyRoutes = (app) => {
+const applyRoutes = ({ app }) => {
   app.use("/", require("./server/routes/miscellaneousRoutes"));
   app.use("/notifications", require("./server/routes/notificationsRoutes"));
   app.use("/render", require("./server/routes/renderRoutes"));
@@ -102,37 +108,140 @@ const applyRoutes = (app) => {
   });
 };

+/**
+ * Fetch Redis nodes from AWS ElastiCache
+ * @returns {Promise<string[]>}
+ */
+const getRedisNodesFromAWS = async () => {
+  const client = new ElastiCacheClient({
+    region: InstanceManager({
+      imex: "ca-central-1",
+      rome: "us-east-2"
+    })
+  });
+
+  const params = {
+    ReplicationGroupId: process.env.REDIS_CLUSTER_ID,
+    ShowCacheNodeInfo: true
+  };
+
+  try {
+    // Fetch the cache clusters associated with the replication group
+    const command = new DescribeCacheClustersCommand(params);
+    const response = await client.send(command);
+    const cacheClusters = response.CacheClusters;
+
+    return cacheClusters.flatMap((cluster) =>
+      cluster.CacheNodes.map((node) => `${node.Endpoint.Address}:${node.Endpoint.Port}`)
+    );
+  } catch (err) {
+    logger.log(`Error fetching Redis nodes from AWS: ${err.message}`, "ERROR", "redis", "api");
+    throw err;
+  }
+};
+
+/**
+ * Connect to Redis Cluster
+ * @returns {Promise<unknown>}
+ */
+const connectToRedisCluster = async () => {
+  let redisServers;
+
+  if (isString(process.env?.REDIS_CLUSTER_ID) && !isEmpty(process.env?.REDIS_CLUSTER_ID)) {
+    // Fetch Redis nodes from AWS if AWS environment variables are present
+    redisServers = await getRedisNodesFromAWS();
+  } else {
+    // Use the Dockerized Redis cluster in development
+    if (isEmpty(process.env?.REDIS_URL) || !isString(process.env?.REDIS_URL)) {
+      logger.log(`[${process.env.NODE_ENV}] No or Malformed REDIS_URL present.`, "ERROR", "redis", "api");
+      process.exit(1);
+    }
+    try {
+      redisServers = JSON.parse(process.env.REDIS_URL);
+    } catch (error) {
+      logger.log(
+        `[${process.env.NODE_ENV}] Failed to parse REDIS_URL: ${error.message}. Exiting...`,
+        "ERROR",
+        "redis",
+        "api"
+      );
+      process.exit(1);
+    }
+  }
+
+  const clusterRetryStrategy = (times) => {
+    const delay =
+      Math.min(CLUSTER_RETRY_BASE_DELAY + times * 50, CLUSTER_RETRY_MAX_DELAY) + Math.random() * CLUSTER_RETRY_JITTER;
+    logger.log(
+      `[${process.env.NODE_ENV}] Redis cluster not yet ready. Retrying in ${delay.toFixed(2)}ms`,
+      "ERROR",
+      "redis",
+      "api"
+    );
+    return delay;
+  };
+
+  const redisCluster = new Redis.Cluster(redisServers, {
+    clusterRetryStrategy,
+    enableAutoPipelining: true,
+    enableReadyCheck: true,
+    redisOptions: {
+      // connectTimeout: 10000, // Timeout for connecting in ms
+      // idleTimeoutMillis: 30000, // Close idle connections after 30s
+      // maxRetriesPerRequest: 5 // Retry a maximum of 5 times per request
+    }
+  });
+
+  return new Promise((resolve, reject) => {
+    redisCluster.on("ready", () => {
+      logger.log(`[${process.env.NODE_ENV}] Redis cluster connection established.`, "INFO", "redis", "api");
+      resolve(redisCluster);
+    });
+
+    redisCluster.on("error", (err) => {
+      logger.log(`[${process.env.NODE_ENV}] Redis cluster connection failed: ${err.message}`, "ERROR", "redis", "api");
+      reject(err);
+    });
+  });
+};
+
 /**
  * Apply Redis to the server
  * @param server
  * @param app
  */
-const applySocketIO = async (server, app) => {
-  // Redis client setup for Pub/Sub and Key-Value Store
-  const pubClient = createClient({ url: process.env.REDIS_URL || "redis://localhost:6379" });
+const applySocketIO = async ({ server, app }) => {
+  const redisCluster = await connectToRedisCluster();
+
+  // Handle errors
+  redisCluster.on("error", (err) => {
+    logger.log(`[${process.env.NODE_ENV}] Redis ERROR`, "ERROR", "redis", "api");
+  });
+
+  const pubClient = redisCluster;
   const subClient = pubClient.duplicate();

   pubClient.on("error", (err) => logger.log(`Redis pubClient error: ${err}`, "ERROR", "redis"));
   subClient.on("error", (err) => logger.log(`Redis subClient error: ${err}`, "ERROR", "redis"));

   try {
     await Promise.all([pubClient.connect(), subClient.connect()]);
     logger.log(`[${process.env.NODE_ENV}] Connected to Redis`, "INFO", "redis", "api");
   } catch (redisError) {
     logger.log("Failed to connect to Redis", "ERROR", "redis", redisError);
   }

   process.on("SIGINT", async () => {
     logger.log("Closing Redis connections...", "INFO", "redis", "api");
     try {
       await Promise.all([pubClient.disconnect(), subClient.disconnect()]);
       process.exit(0);
       logger.log("Redis connections closed. Process will exit.", "INFO", "redis", "api");
     } catch (error) {
       logger.log(`Error closing Redis connections: ${error.message}`, "ERROR", "redis", "api");
     }
   });

   const ioRedis = new Server(server, {
     path: "/wss",
     adapter: createAdapter(pubClient, subClient),
     cors: {
-      origin: SOCKETIO_CORS_ORIGIN,
+      origin:
+        process.env?.NODE_ENV === "development"
+          ? [...SOCKETIO_CORS_ORIGIN, ...SOCKETIO_CORS_ORIGIN_DEV]
+          : SOCKETIO_CORS_ORIGIN,
       methods: ["GET", "POST"],
       credentials: true,
       exposedHeaders: ["set-cookie"]
@@ -147,6 +256,7 @@ const applySocketIO = async (server, app) => {
       username: "admin",
       password: process.env.REDIS_ADMIN_PASS
     },
+
     mode: process.env.REDIS_ADMIN_MODE || "development"
   });
 }
@@ -161,19 +271,22 @@ const applySocketIO = async (server, app) => {
     }
   });

-  app.use((req, res, next) => {
-    Object.assign(req, {
+  const api = {
     pubClient,
     io,
-    ioRedis
-  });
+    ioRedis,
+    redisCluster
+  };

+  app.use((req, res, next) => {
+    Object.assign(req, api);

     next();
   });

-  Object.assign(module.exports, { io, pubClient, ioRedis });
+  Object.assign(module.exports, api);

-  return { pubClient, io, ioRedis };
+  return api;
 };

 /**
@@ -186,16 +299,16 @@ const main = async () => {

   const server = http.createServer(app);

-  const { pubClient, ioRedis } = await applySocketIO(server, app);
-  const api = applyRedisHelpers(pubClient, app, logger);
-  const ioHelpers = applyIOHelpers(app, api, ioRedis, logger);
+  const { pubClient, ioRedis } = await applySocketIO({ server, app });
+  const redisHelpers = applyRedisHelpers({ pubClient, app, logger });
+  const ioHelpers = applyIOHelpers({ app, redisHelpers, ioRedis, logger });

   // Legacy Socket Events
   require("./server/web-sockets/web-socket");

-  applyMiddleware(app);
-  applyRoutes(app);
-  redisSocketEvents(ioRedis, api, ioHelpers);
+  applyMiddleware({ app });
+  applyRoutes({ app });
+  redisSocketEvents({ io: ioRedis, redisHelpers, ioHelpers, logger });

   try {
     await server.listen(port);
```
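
The `clusterRetryStrategy` in the diff above bounds the reconnect delay (linear backoff from 100 ms, capped at 5000 ms) and adds up to 100 ms of jitter. A small Python port of the same arithmetic (illustrative; the real code is the JavaScript above) makes the bounds easy to check:

```python
import random

# Constants mirroring the server.js retry strategy
CLUSTER_RETRY_BASE_DELAY = 100   # ms
CLUSTER_RETRY_MAX_DELAY = 5000   # ms
CLUSTER_RETRY_JITTER = 100       # ms

def cluster_retry_delay(times, rand=random.random):
    """Delay (ms) before reconnect attempt `times`: linear backoff from
    the base delay, capped at the max, plus up to CLUSTER_RETRY_JITTER
    of random jitter. `rand` is injectable for testing."""
    return min(CLUSTER_RETRY_BASE_DELAY + times * 50, CLUSTER_RETRY_MAX_DELAY) + rand() * CLUSTER_RETRY_JITTER
```

The jitter keeps many API instances from hammering a recovering cluster in lockstep; the cap keeps the worst-case delay at just over five seconds.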
31
server/email/mailer.js
Normal file
@@ -0,0 +1,31 @@
const { isString, isEmpty } = require("lodash");
const { defaultProvider } = require("@aws-sdk/credential-provider-node");
const { default: InstanceManager } = require("../utils/instanceMgr");
const aws = require("@aws-sdk/client-ses");
const nodemailer = require("nodemailer");

const isLocal = isString(process.env?.LOCALSTACK_HOSTNAME) && !isEmpty(process.env?.LOCALSTACK_HOSTNAME);

const sesConfig = {
apiVersion: "latest",
credentials: defaultProvider(),
region: isLocal
? "ca-central-1"
: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
})
};

if (isLocal) {
sesConfig.endpoint = `http://${process.env.LOCALSTACK_HOSTNAME}:4566`;
console.log(`SES Mailer set to LocalStack end point: ${sesConfig.endpoint}`);
}

const ses = new aws.SES(sesConfig);

let transporter = nodemailer.createTransport({
SES: { ses, aws }
});

module.exports = transporter;
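The new mailer.js above branches on `LOCALSTACK_HOSTNAME` to point SES at LocalStack during local development. The branch can be sketched as a pure function that runs without the AWS SDK or lodash (`buildSesConfig` is a hypothetical helper; `env` stands in for `process.env`):

```javascript
// Sketch of the endpoint selection in mailer.js above, extracted as a
// pure function. `buildSesConfig` is a hypothetical name for illustration.
const buildSesConfig = (env, instanceRegion) => {
  const isLocal = typeof env.LOCALSTACK_HOSTNAME === "string" && env.LOCALSTACK_HOSTNAME.length > 0;
  const config = {
    apiVersion: "latest",
    region: isLocal ? "ca-central-1" : instanceRegion
  };
  if (isLocal) {
    // LocalStack exposes its services on the edge port 4566.
    config.endpoint = `http://${env.LOCALSTACK_HOSTNAME}:4566`;
  }
  return config;
};

console.log(buildSesConfig({ LOCALSTACK_HOSTNAME: "localhost" }, "us-east-2"));
console.log(buildSesConfig({}, "us-east-2"));
```

Keeping the branch in one place means every transport built from this module automatically targets LocalStack when the variable is set.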
@@ -3,30 +3,13 @@ require("dotenv").config({
path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
});
const axios = require("axios");
let nodemailer = require("nodemailer");
let aws = require("@aws-sdk/client-ses");
let { defaultProvider } = require("@aws-sdk/credential-provider-node");
const InstanceManager = require("../utils/instanceMgr").default;
const logger = require("../utils/logger");
const client = require("../graphql-client/graphql-client").client;
const queries = require("../graphql-client/queries");
const { isObject } = require("lodash");
const generateEmailTemplate = require("./generateTemplate");
const moment = require("moment");

const ses = new aws.SES({
// The key apiVersion is no longer supported in v3, and can be removed.
// @deprecated The client uses the "latest" apiVersion.
apiVersion: "latest",
defaultProvider,
region: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
})
});
let transporter = nodemailer.createTransport({
SES: { ses, aws }
});
const mailer = require("./mailer");

// Get the image from the URL and return it as a base64 string
const getImage = async (imageUrl) => {
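`InstanceManager({ imex, rome })` appears throughout this diff to pick a per-deployment value. A minimal sketch of that selector (how the real `utils/instanceMgr` resolves the current instance is not shown in this diff, so an explicit `instance` argument is used instead):

```javascript
// Toy version of the InstanceManager pattern: select a value keyed by the
// deployment instance name.
const instanceManager = (options, instance) => options[instance];

console.log(instanceManager({ imex: "ca-central-1", rome: "us-east-2" }, "rome"));
```

The same shape covers all the call sites above, whether the values are AWS regions, sender addresses, or sending rates.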
@@ -66,7 +49,7 @@ const logEmail = async (req, email) => {
const sendServerEmail = async ({ subject, text }) => {
if (process.env.NODE_ENV === undefined) return;
try {
transporter.sendMail(
mailer.sendMail(
{
from: InstanceManager({
imex: `ImEX Online API - ${process.env.NODE_ENV} <noreply@imex.online>`,
@@ -96,9 +79,9 @@ const sendServerEmail = async ({ subject, text }) => {
}
};

const sendProManagerWelcomeEmail = async ({to, subject, html}) => {
const sendProManagerWelcomeEmail = async ({ to, subject, html }) => {
try {
await transporter.sendMail({
await mailer.sendMail({
from: `ProManager <noreply@promanager.web-est.com>`,
to,
subject,
@@ -112,7 +95,7 @@ const sendProManagerWelcomeEmail = async ({to, subject, html}) => {

const sendTaskEmail = async ({ to, subject, type = "text", html, text, attachments }) => {
try {
transporter.sendMail(
mailer.sendMail(
{
from: InstanceManager({
imex: `ImEX Online <noreply@imex.online>`,
@@ -166,7 +149,7 @@ const sendEmail = async (req, res) => {
);
}

transporter.sendMail(
mailer.sendMail(
{
from: `${req.body.from.name} <${req.body.from.address}>`,
replyTo: req.body.ReplyTo.Email,
@@ -280,7 +263,7 @@ const emailBounce = async (req, res) => {
status: "Bounced",
context: message.bounce?.bouncedRecipients
});
transporter.sendMail(
mailer.sendMail(
{
from: InstanceMgr({
imex: `ImEX Online <noreply@imex.online>`,
@@ -2,30 +2,14 @@ const path = require("path");
require("dotenv").config({
path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
});
let nodemailer = require("nodemailer");
let aws = require("@aws-sdk/client-ses");
let { defaultProvider } = require("@aws-sdk/credential-provider-node");
const InstanceManager = require("../utils/instanceMgr").default;
const logger = require("../utils/logger");
const client = require("../graphql-client/graphql-client").client;
const queries = require("../graphql-client/queries");
const generateEmailTemplate = require("./generateTemplate");
const moment = require("moment");
const moment = require("moment-timezone");
const { taskEmailQueue } = require("./tasksEmailsQueue");

const ses = new aws.SES({
apiVersion: "latest",
defaultProvider,
region: InstanceManager({
imex: "ca-central-1",
rome: "us-east-2"
})
});

const transporter = nodemailer.createTransport({
SES: { ses, aws },
sendingRate: InstanceManager({ imex: 40, rome: 10 })
});
const mailer = require("./mailer");

// Initialize the Tasks Email Queue
const tasksEmailQueue = taskEmailQueue();
@@ -45,22 +29,20 @@ if (process.env.NODE_ENV !== "development") {
// Handling SIGINT (e.g., Ctrl+C)
process.on("SIGINT", async () => {
await tasksEmailQueueCleanup();
process.exit(0);
});
// Handling SIGTERM (e.g., sent by system shutdown)
process.on("SIGTERM", async () => {
await tasksEmailQueueCleanup();
process.exit(0);
});
// Handling uncaught exceptions
process.on("uncaughtException", async (err) => {
await tasksEmailQueueCleanup();
process.exit(1); // Exit with an 'error' code
throw err;
});
// Handling unhandled promise rejections
process.on("unhandledRejection", async (reason, promise) => {
await tasksEmailQueueCleanup();
process.exit(1); // Exit with an 'error' code
throw reason;
});
}
@@ -108,12 +90,13 @@ const getEndpoints = (bodyshop) =>
: "https://romeonline.io"
});

const generateTemplateArgs = (title, priority, description, dueDate, bodyshop, job, taskId) => {
const generateTemplateArgs = (title, priority, description, dueDate, bodyshop, job, taskId, dateLine) => {
const endPoints = getEndpoints(bodyshop);
return {
header: title,
subHeader: `Body Shop: ${bodyshop.shopname} | Priority: ${formatPriority(priority)} ${formatDate(dueDate)}`,
body: `Reference: ${job.ro_number || "N/A"} | ${job.ownr_co_nm ? job.ownr_co_nm : `${job.ownr_fn || ""} ${job.ownr_ln || ""}`.trim()} | ${`${job.v_model_yr || ""} ${job.v_make_desc || ""} ${job.v_model_desc || ""}`.trim()}<br>${description ? description.concat("<br>") : ""}<a href="${endPoints}/manage/tasks/alltasks?taskid=${taskId}">View this task.</a>`
body: `Reference: ${job.ro_number || "N/A"} | ${job.ownr_co_nm ? job.ownr_co_nm : `${job.ownr_fn || ""} ${job.ownr_ln || ""}`.trim()} | ${`${job.v_model_yr || ""} ${job.v_make_desc || ""} ${job.v_model_desc || ""}`.trim()}<br>${description ? description.concat("<br>") : ""}<a href="${endPoints}/manage/tasks/alltasks?taskid=${taskId}">View this task.</a>`,
dateLine
};
};

@@ -125,6 +108,7 @@ const generateTemplateArgs = (title, priority, description, dueDate, bodyshop, j
* @param html
* @param taskIds
* @param successCallback
* @param requestInstance
*/
const sendMail = (type, to, subject, html, taskIds, successCallback, requestInstance) => {
const fromEmails = InstanceManager({
@@ -135,7 +119,7 @@ const sendMail = (type, to, subject, html, taskIds, successCallback, requestInst
: "Rome Online <noreply@romeonline.io>"
});

transporter.sendMail(
mailer.sendMail(
{
from: fromEmails,
to,
@@ -152,8 +136,6 @@ const sendMail = (type, to, subject, html, taskIds, successCallback, requestInst
}
}
);
// }
// });
};
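The `body` template string above builds a reference line with fallbacks at each position (company name or owner name, "N/A" for a missing RO number). Pulled out as a pure function, the fallback logic reads like this (the field names come from the job record in the template; `formatJobReference` is a hypothetical name):

```javascript
// Sketch of the reference line built inside generateTemplateArgs above.
const formatJobReference = (job) => {
  // Prefer the owner's company name; otherwise join first/last name.
  const owner = job.ownr_co_nm
    ? job.ownr_co_nm
    : `${job.ownr_fn || ""} ${job.ownr_ln || ""}`.trim();
  const vehicle = `${job.v_model_yr || ""} ${job.v_make_desc || ""} ${job.v_model_desc || ""}`.trim();
  return `Reference: ${job.ro_number || "N/A"} | ${owner} | ${vehicle}`;
};

console.log(formatJobReference({
  ro_number: "1234", ownr_fn: "Jane", ownr_ln: "Doe",
  v_model_yr: "2020", v_make_desc: "Honda", v_model_desc: "Civic"
}));
// Reference: 1234 | Jane Doe | 2020 Honda Civic
```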
/**
@@ -178,6 +160,8 @@ const taskAssignedEmail = async (req, res) => {
id: newTask.id
});

const dateLine = moment().tz(tasks_by_pk.bodyshop.timezone).format("M/DD/YYYY @ hh:mm a");

sendMail(
"assigned",
tasks_by_pk.assigned_to_employee.user_email,
@@ -190,7 +174,8 @@ const taskAssignedEmail = async (req, res) => {
newTask.due_date,
tasks_by_pk.bodyshop,
tasks_by_pk.job,
newTask.id
newTask.id,
dateLine
)
),
null,
@@ -259,6 +244,8 @@ const tasksRemindEmail = async (req, res) => {

const taskIds = groupedTasks[recipient.email].map((task) => task.id);

const dateLine = moment().tz(tasksRequest?.tasks[0].bodyshop.timezone).format("M/DD/YYYY @ hh:mm a");

// There is only the one email to send to this author.
if (recipient.count === 1) {
const onlyTask = groupedTasks[recipient.email][0];
@@ -274,7 +261,8 @@ const tasksRemindEmail = async (req, res) => {
onlyTask.due_date,
onlyTask.bodyshop,
onlyTask.job,
onlyTask.id
onlyTask.id,
dateLine
)
);
}
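The `dateLine` above uses moment-timezone to render "now" in the body shop's own timezone. For illustration only, the same rendering can be done with Node's built-in `Intl` API and no dependency (the pattern below approximates the `"M/DD/YYYY @ hh:mm a"` format; this is not what the commit uses):

```javascript
// Stdlib equivalent of the moment-timezone dateLine above, for comparison.
const formatDateLine = (date, timeZone) => {
  const parts = new Intl.DateTimeFormat("en-US", {
    timeZone, year: "numeric", month: "numeric", day: "2-digit",
    hour: "2-digit", minute: "2-digit", hour12: true
  }).formatToParts(date).reduce((acc, p) => ({ ...acc, [p.type]: p.value }), {});
  const hour = parts.hour.padStart(2, "0");
  const minute = parts.minute.padStart(2, "0");
  return `${parts.month}/${parts.day}/${parts.year} @ ${hour}:${minute} ${parts.dayPeriod.toLowerCase()}`;
};

// Oct 11 2024 18:30 UTC is 14:30 in Toronto (EDT, UTC-4):
console.log(formatDateLine(new Date(Date.UTC(2024, 9, 11, 18, 30)), "America/Toronto"));
```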
@@ -297,6 +285,7 @@ const tasksRemindEmail = async (req, res) => {
emailData.html = generateEmailTemplate({
header: `${allTasks.length} Tasks require your attention`,
subHeader: `Please click on the Tasks below to view the Task.`,
dateLine,
body: `<ul>
${allTasks
.map((task) =>
@@ -335,6 +324,29 @@ const tasksRemindEmail = async (req, res) => {
}
};

// Note: Uncomment this to test locally, it will call the remind_at email check every 20 seconds
// const callTaskRemindEmailInternally = () => {
// const req = {
// body: {
// // You can mock any request data here if needed
// }
// };
//
// const res = {
// status: (code) => {
// return {
// json: (data) => {
// console.log(`Response Status: ${code}`, data);
// }
// };
// }
// };
//
// // Call the taskRemindEmail function with mock req and res
// tasksRemindEmail(req, res);
// };
// setInterval(callTaskRemindEmailInternally, 20000);

module.exports = {
taskAssignedEmail,
tasksRemindEmail,
@@ -13,7 +13,7 @@ const taskEmailQueue = () =>
console.log("Processing reminds for taskIds: ", taskIds.join(", "));

// Set the remind_at_sent to the current time.
const now = moment().toISOString();
const now = moment.utc().toISOString();

client
.request(UPDATE_TASKS_REMIND_AT_SENT, { taskIds, now })
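For comparison, the same kind of UTC timestamp can be produced without moment at all: `Date#toISOString` always serializes in UTC with a trailing `"Z"`, which is the shape the `remind_at_sent` update above sends to the API.

```javascript
// Stdlib equivalent of the moment.utc().toISOString() call above.
const now = new Date().toISOString();
console.log(now); // e.g. "2024-10-11T18:30:00.000Z"
```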
@@ -2489,6 +2489,7 @@ exports.QUERY_REMIND_TASKS = `
bodyshop {
shopname
convenient_company
timezone
}
bodyshopid
}
@@ -2512,6 +2513,7 @@ query QUERY_TASK_BY_ID($id: uuid!) {
bodyshop{
shopname
convenient_company
timezone
}
job{
ro_number
@@ -1,4 +1,4 @@
const applyIOHelpers = (app, api, io, logger) => {
const applyIOHelpers = ({ app, api, io, logger }) => {
const getBodyshopRoom = (bodyshopID) => `bodyshop-broadcast-room:${bodyshopID}`;

const ioHelpersAPI = {
@@ -14,4 +14,4 @@ const applyIOHelpers = (app, api, io, logger) => {
return ioHelpersAPI;
};

module.exports = applyIOHelpers;
module.exports = { applyIOHelpers };
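This one-line change is the refactor the whole PR repeats: a positional signature becomes a single destructured options object, so call sites name their arguments and order no longer matters. In miniature (toy stand-in functions, not the project's code):

```javascript
// Before: callers must remember the parameter order.
const applyHelpersPositional = (app, api, io, logger) => ({ app, api, io, logger });

// After: callers pass one object with named keys, in any order.
const applyHelpersNamed = ({ app, api, io, logger }) => ({ app, api, io, logger });

console.log(applyHelpersNamed({ logger: "log", app: "app", io: "io", api: "api" }));
```

The matching `module.exports = { applyIOHelpers }` change makes the import side symmetric: consumers destructure the export by name instead of taking a bare default.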
@@ -1,14 +1,14 @@
const logger = require("./logger");
/**
* Apply Redis helper functions
* @param pubClient
* @param app
* @param logger
*/
const applyRedisHelpers = (pubClient, app, logger) => {
const applyRedisHelpers = ({ pubClient, app, logger }) => {
// Store session data in Redis
const setSessionData = async (socketId, key, value) => {
try {
await pubClient.hSet(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
await pubClient.hset(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
} catch (error) {
logger.log(`Error Setting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -17,7 +17,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve session data from Redis
const getSessionData = async (socketId, key) => {
try {
const data = await pubClient.hGet(`socket:${socketId}`, key);
const data = await pubClient.hget(`socket:${socketId}`, key);
return data ? JSON.parse(data) : null;
} catch (error) {
logger.log(`Error Getting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
@@ -38,7 +38,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
try {
// keyValues is expected to be an object { key1: value1, key2: value2, ... }
const entries = Object.entries(keyValues).map(([key, value]) => [key, JSON.stringify(value)]);
await pubClient.hSet(`socket:${socketId}`, ...entries.flat());
await pubClient.hset(`socket:${socketId}`, ...entries.flat());
} catch (error) {
logger.log(`Error Setting Multiple Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -47,7 +47,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve multiple session data from Redis
const getMultipleSessionData = async (socketId, keys) => {
try {
const data = await pubClient.hmGet(`socket:${socketId}`, keys);
const data = await pubClient.hmget(`socket:${socketId}`, keys);
// Redis returns an object with null values for missing keys, so we parse the non-null ones
return Object.fromEntries(keys.map((key, index) => [key, data[index] ? JSON.parse(data[index]) : null]));
} catch (error) {
@@ -60,7 +60,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Use Redis multi/pipeline to batch the commands
const multi = pubClient.multi();
keyValueArray.forEach(([key, value]) => {
multi.hSet(`socket:${socketId}`, key, JSON.stringify(value));
multi.hset(`socket:${socketId}`, key, JSON.stringify(value));
});
await multi.exec(); // Execute all queued commands
} catch (error) {
@@ -71,7 +71,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the end of the Redis list
const addItemToEndOfList = async (socketId, key, newItem) => {
try {
await pubClient.rPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.rpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the end of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -80,7 +80,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the beginning of the Redis list
const addItemToBeginningOfList = async (socketId, key, newItem) => {
try {
await pubClient.lPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.lpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the beginning of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -98,7 +98,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Add methods to manage room users
const addUserToRoom = async (room, user) => {
try {
await pubClient.sAdd(room, JSON.stringify(user));
await pubClient.sadd(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error adding user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -106,7 +106,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {

const removeUserFromRoom = async (room, user) => {
try {
await pubClient.sRem(room, JSON.stringify(user));
await pubClient.srem(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error removing user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -114,7 +114,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {

const getUsersInRoom = async (room) => {
try {
const users = await pubClient.sMembers(room);
const users = await pubClient.smembers(room);
return users.map((user) => JSON.parse(user));
} catch (error) {
logger.log(`Error getting users in room ${room}: ${error}`, "ERROR", "redis");
@@ -143,70 +143,87 @@ const applyRedisHelpers = (pubClient, app, logger) => {
next();
});

// // Demo to show how all the helper functions work
// Demo to show how all the helper functions work
// const demoSessionData = async () => {
// const socketId = "testSocketId";
//
// // Store session data using setSessionData
// await exports.setSessionData(socketId, "field1", "Hello, Redis!");
//
// // Retrieve session data using getSessionData
// const field1Value = await exports.getSessionData(socketId, "field1");
// // 1. Test setSessionData and getSessionData
// await setSessionData(socketId, "field1", "Hello, Redis!");
// const field1Value = await getSessionData(socketId, "field1");
// console.log("Retrieved single field value:", field1Value);
//
// // Store multiple session data using setMultipleSessionData
// await exports.setMultipleSessionData(socketId, { field2: "Second Value", field3: "Third Value" });
//
// // Retrieve multiple session data using getMultipleSessionData
// const multipleFields = await exports.getMultipleSessionData(socketId, ["field2", "field3"]);
// // 2. Test setMultipleSessionData and getMultipleSessionData
// await setMultipleSessionData(socketId, { field2: "Second Value", field3: "Third Value" });
// const multipleFields = await getMultipleSessionData(socketId, ["field2", "field3"]);
// console.log("Retrieved multiple field values:", multipleFields);
//
// // Store multiple session data using setMultipleFromArraySessionData
// await exports.setMultipleFromArraySessionData(socketId, [
// // 3. Test setMultipleFromArraySessionData
// await setMultipleFromArraySessionData(socketId, [
// ["field4", "Fourth Value"],
// ["field5", "Fifth Value"]
// ]);
//
// // Retrieve and log all fields
// const allFields = await exports.getMultipleSessionData(socketId, [
// "field1",
// "field2",
// "field3",
// "field4",
// "field5"
// ]);
// const allFields = await getMultipleSessionData(socketId, ["field1", "field2", "field3", "field4", "field5"]);
// console.log("Retrieved all field values:", allFields);
//
// // 4. Test list functions
// // Add item to the end of a Redis list
// await exports.addItemToEndOfList(socketId, "logEvents", { event: "Log Event 1", timestamp: new Date() });
// await exports.addItemToEndOfList(socketId, "logEvents", { event: "Log Event 2", timestamp: new Date() });
// await addItemToEndOfList(socketId, "logEvents", { event: "Log Event 1", timestamp: new Date() });
// await addItemToEndOfList(socketId, "logEvents", { event: "Log Event 2", timestamp: new Date() });
//
// // Add item to the beginning of a Redis list
// await exports.addItemToBeginningOfList(socketId, "logEvents", { event: "First Log Event", timestamp: new Date() });
// await addItemToBeginningOfList(socketId, "logEvents", { event: "First Log Event", timestamp: new Date() });
//
// // Retrieve the entire list (using lRange)
// const logEvents = await pubClient.lRange(`socket:${socketId}:logEvents`, 0, -1);
// console.log("Log Events List:", logEvents.map(JSON.parse));
// // Retrieve the entire list
// const logEventsData = await pubClient.lrange(`socket:${socketId}:logEvents`, 0, -1);
// const logEvents = logEventsData.map((item) => JSON.parse(item));
// console.log("Log Events List:", logEvents);
//
// // **Add the new code below to test clearList**
//
// // Clear the list using clearList
// await exports.clearList(socketId, "logEvents");
// // 5. Test clearList
// await clearList(socketId, "logEvents");
// console.log("Log Events List cleared.");
//
// // Retrieve the list after clearing to confirm it's empty
// const logEventsAfterClear = await pubClient.lRange(`socket:${socketId}:logEvents`, 0, -1);
// const logEventsAfterClear = await pubClient.lrange(`socket:${socketId}:logEvents`, 0, -1);
// console.log("Log Events List after clearing:", logEventsAfterClear); // Should be an empty array
//
// // Clear session data
// await exports.clearSessionData(socketId);
// // 6. Test clearSessionData
// await clearSessionData(socketId);
// console.log("Session data cleared.");
// };
//
// // 7. Test room functions
// const roomName = "testRoom";
// const user1 = { id: 1, name: "Alice" };
// const user2 = { id: 2, name: "Bob" };
//
// // Add users to room
// await addUserToRoom(roomName, user1);
// await addUserToRoom(roomName, user2);
//
// // Get users in room
// const usersInRoom = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName}:`, usersInRoom);
//
// // Remove a user from room
// await removeUserFromRoom(roomName, user1);
//
// // Get users in room after removal
// const usersInRoomAfterRemoval = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName} after removal:`, usersInRoomAfterRemoval);
//
// // Clean up: remove remaining users from room
// await removeUserFromRoom(roomName, user2);
//
// // Verify room is empty
// const usersInRoomAfterCleanup = await getUsersInRoom(roomName);
// console.log(`Users in room ${roomName} after cleanup:`, usersInRoomAfterCleanup); // Should be empty
// };
// if (process.env.NODE_ENV === "development") {
// demoSessionData();
// }
// "th1s1sr3d1s" (BCrypt)

return api;
};
module.exports = applyRedisHelpers;

module.exports = { applyRedisHelpers };
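The renames in this file (`hSet` → `hset`, `sAdd` → `sadd`, and so on) match ioredis's lowercase command names, versus the camelCase methods of node-redis v4. The JSON round trip those helpers perform can be shown with an in-memory stand-in for the client, with no Redis required (`makeFakeRedis` is a test double, not project code):

```javascript
// Minimal fake of the two hash commands the session helpers above use.
const makeFakeRedis = () => {
  const store = new Map();
  return {
    hset: async (hash, field, value) => {
      if (!store.has(hash)) store.set(hash, new Map());
      store.get(hash).set(field, value);
    },
    hget: async (hash, field) => store.get(hash)?.get(field) ?? null
  };
};

// Same shape as the helpers in the diff: JSON-encode on write, decode on read.
const setSessionData = async (client, socketId, key, value) =>
  client.hset(`socket:${socketId}`, key, JSON.stringify(value));

const getSessionData = async (client, socketId, key) => {
  const data = await client.hget(`socket:${socketId}`, key);
  return data ? JSON.parse(data) : null;
};

(async () => {
  const client = makeFakeRedis();
  await setSessionData(client, "abc", "user", { uid: 1, email: "a@b.c" });
  console.log(await getSessionData(client, "abc", "user"));
})();
```

Because values are stringified on the way in, any JSON-serializable object survives the round trip, and a missing field comes back as `null` rather than throwing.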
@@ -1,102 +1,18 @@
const path = require("path");
require("dotenv").config({
path: path.resolve(process.cwd(), `.env.${process.env.NODE_ENV || "development"}`)
});
const logger = require("../utils/logger");
const { admin } = require("../firebase/firebase-handler");

// Logging helper functions
function createLogEvent(socket, level, message) {
console.log(`[IOREDIS LOG EVENT] - ${socket.user.email} - ${socket.id} - ${message}`);
logger.log("ioredis-log-event", level, socket.user.email, null, { wsmessage: message });
}

const registerUpdateEvents = (socket) => {
socket.on("update-token", async (newToken) => {
try {
socket.user = await admin.auth().verifyIdToken(newToken);
createLogEvent(socket, "INFO", "Token updated successfully");
socket.emit("token-updated", { success: true });
} catch (error) {
createLogEvent(socket, "ERROR", `Token update failed: ${error.message}`);
socket.emit("token-updated", { success: false, error: error.message });
// Optionally disconnect the socket if token verification fails
socket.disconnect();
}
});
};

const redisSocketEvents = (io, { addUserToRoom, getUsersInRoom, removeUserFromRoom }, { getBodyshopRoom }) => {
// Room management and broadcasting events
function registerRoomAndBroadcastEvents(socket) {
socket.on("join-bodyshop-room", async (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.join(room);
await addUserToRoom(room, { uid: socket.user.uid, email: socket.user.email, socket: socket.id });
createLogEvent(socket, "DEBUG", `Client joined bodyshop room: ${room}`);

// Notify all users in the room about the updated user list
const usersInRoom = await getUsersInRoom(room);
io.to(room).emit("room-users-updated", usersInRoom);
} catch (error) {
createLogEvent(socket, "ERROR", `Error joining room: ${error}`);
}
});

socket.on("leave-bodyshop-room", async (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.leave(room);
createLogEvent(socket, "DEBUG", `Client left bodyshop room: ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error joining room: ${error}`);
}
});

socket.on("get-room-users", async (bodyshopUUID, callback) => {
try {
const usersInRoom = await getUsersInRoom(getBodyshopRoom(bodyshopUUID));
callback(usersInRoom);
} catch (error) {
createLogEvent(socket, "ERROR", `Error getting room: ${error}`);
}
});

socket.on("broadcast-to-bodyshop", async (bodyshopUUID, message) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
io.to(room).emit("bodyshop-message", message);
createLogEvent(socket, "INFO", `Broadcast message to bodyshop ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error getting room: ${error}`);
}
});

socket.on("disconnect", async () => {
try {
createLogEvent(socket, "DEBUG", `User disconnected.`);

// Get all rooms the socket is part of
const rooms = Array.from(socket.rooms).filter((room) => room !== socket.id);
for (const room of rooms) {
await removeUserFromRoom(room, { uid: socket.user.uid, email: socket.user.email, socket: socket.id });
}
} catch (error) {
createLogEvent(socket, "ERROR", `Error getting room: ${error}`);
}
});
}

// Register all socket events for a given socket connection
function registerSocketEvents(socket) {
createLogEvent(socket, "DEBUG", `Registering RedisIO Socket Events.`);

// Register room and broadcasting events
registerRoomAndBroadcastEvents(socket);
registerUpdateEvents(socket);
}
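The auth middleware in this file verifies the handshake token, stores the decoded user on the socket, and only then calls `next()`. A sketch of that flow with a stub verifier in place of firebase-admin's `verifyIdToken` (the stub's token string and user fields are illustrative only):

```javascript
// Stub verifier standing in for admin.auth().verifyIdToken.
const verifyIdToken = async (token) => {
  if (token !== "valid-token") throw new Error("invalid token");
  return { uid: "u1", email: "user@example.com" };
};

// Socket.IO-style middleware: attach the decoded user, or fail the handshake.
const authMiddleware = (socket, next) => {
  const { token } = socket.handshake.auth;
  if (!token) return next(new Error("missing auth token"));
  verifyIdToken(token)
    .then((user) => {
      socket.user = user;
      next();
    })
    .catch((error) => next(error));
};

const socket = { handshake: { auth: { token: "valid-token" } } };
authMiddleware(socket, (err) => console.log(err ? err.message : socket.user.email));
```

Passing an `Error` to `next` is what rejects the Socket.IO connection, so unauthenticated clients never reach the event handlers registered on `"connection"`.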
const redisSocketEvents = ({
io,
redisHelpers: { setSessionData, clearSessionData }, // Note: Used if we persist user to Redis
ioHelpers: { getBodyshopRoom },
logger
}) => {
// Logging helper functions
const createLogEvent = (socket, level, message) => {
console.log(`[IOREDIS LOG EVENT] - ${socket?.user?.email} - ${socket.id} - ${message}`);
logger.log("ioredis-log-event", level, socket?.user?.email, null, { wsmessage: message });
};

// Socket Auth Middleware
const authMiddleware = (socket, next) => {
try {
if (socket.handshake.auth.token) {
@@ -105,6 +21,9 @@ const redisSocketEvents = (io, { addUserToRoom, getUsersInRoom, removeUserFromRo
.verifyIdToken(socket.handshake.auth.token)
.then((user) => {
socket.user = user;
// Note: if we ever want to capture user data across sockets
// Uncomment the following line and then remove the next() to a second then()
// return setSessionData(socket.id, "user", user);
next();
})
.catch((error) => {
@@ -122,7 +41,99 @@ const redisSocketEvents = (io, { addUserToRoom, getUsersInRoom, removeUserFromRo
}
};

// Socket.IO Middleware and Connection
// Register Socket Events
const registerSocketEvents = (socket) => {
createLogEvent(socket, "DEBUG", `Registering RedisIO Socket Events.`);

// Token Update Events
const registerUpdateEvents = (socket) => {
const updateToken = async (newToken) => {
try {
// noinspection UnnecessaryLocalVariableJS
const user = await admin.auth().verifyIdToken(newToken, true);
socket.user = user;

// If We ever want to persist user Data across workers
// await setSessionData(socket.id, "user", user);

createLogEvent(socket, "INFO", "Token updated successfully");

socket.emit("token-updated", { success: true });
} catch (error) {
if (error.code === "auth/id-token-expired") {
createLogEvent(socket, "WARNING", "Stale token received, waiting for new token");
socket.emit("token-updated", {
success: false,
error: "Stale token."
});
} else {
createLogEvent(socket, "ERROR", `Token update failed: ${error.message}`);
socket.emit("token-updated", { success: false, error: error.message });
// For any other errors, optionally disconnect the socket
socket.disconnect();
}
}
};
socket.on("update-token", updateToken);
};
// Room Broadcast Events
const registerRoomAndBroadcastEvents = (socket) => {
const joinBodyshopRoom = (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.join(room);
createLogEvent(socket, "DEBUG", `Client joined bodyshop room: ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error joining room: ${error}`);
}
};

const leaveBodyshopRoom = (bodyshopUUID) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
socket.leave(room);
createLogEvent(socket, "DEBUG", `Client left bodyshop room: ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error joining room: ${error}`);
}
};

const broadcastToBodyshopRoom = (bodyshopUUID, message) => {
try {
const room = getBodyshopRoom(bodyshopUUID);
io.to(room).emit("bodyshop-message", message);
createLogEvent(socket, "DEBUG", `Broadcast message to bodyshop ${room}`);
} catch (error) {
createLogEvent(socket, "ERROR", `Error getting room: ${error}`);
}
};

socket.on("join-bodyshop-room", joinBodyshopRoom);
socket.on("leave-bodyshop-room", leaveBodyshopRoom);
socket.on("broadcast-to-bodyshop", broadcastToBodyshopRoom);
};
// Disconnect Events
const registerDisconnectEvents = (socket) => {
const disconnect = () => {
createLogEvent(socket, "DEBUG", `User disconnected.`);
const rooms = Array.from(socket.rooms).filter((room) => room !== socket.id);
for (const room of rooms) {
socket.leave(room);
}
// If we ever want to persist the user across workers
// clearSessionData(socket.id);
};

socket.on("disconnect", disconnect);
};

// Call Handlers
registerRoomAndBroadcastEvents(socket);
registerUpdateEvents(socket);
registerDisconnectEvents(socket);
};

// Associate Middleware and Handlers
io.use(authMiddleware);
io.on("connection", registerSocketEvents);
};
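The disconnect handler above filters the socket's own id out of `socket.rooms` before leaving each room, because Socket.IO always places a socket in a private room named after its id. That filtering, plus the room-name helper from the IO helpers, can be sketched with plain data in place of a real socket:

```javascript
// Room-name helper, as defined in applyIOHelpers.
const getBodyshopRoom = (bodyshopID) => `bodyshop-broadcast-room:${bodyshopID}`;

// A socket's rooms always include its own id; skip that entry.
const roomsToLeave = (socket) =>
  Array.from(socket.rooms).filter((room) => room !== socket.id);

const socket = {
  id: "sock-1",
  rooms: new Set(["sock-1", getBodyshopRoom("shop-42")])
};
console.log(roomsToLeave(socket)); // ["bodyshop-broadcast-room:shop-42"]
```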