docker-redis - local tests

Signed-off-by: Dave Richer <dave@imexsystems.ca>
This commit is contained in:
Dave Richer
2024-10-02 00:27:11 -04:00
parent b7423aebf6
commit 04dec6d91c
14 changed files with 456 additions and 18 deletions

25
.dockerignore Normal file

@@ -0,0 +1,25 @@
# Directories to exclude
.circleci
.idea
.platform
.vscode
_reference
client
redis/dockerdata
hasura
node_modules
# Files to exclude
.ebignore
.editorconfig
.eslintrc.json
.gitignore
.prettierrc.js
Dockerfile
README.MD
bodyshop_translations.babel
docker-compose.yml
ecosystem.config.js
# Optional: Exclude logs and temporary files
*.log

39
Dockerfile Normal file

@@ -0,0 +1,39 @@
# Use Amazon Linux 2023 as the base image
FROM amazonlinux:2023
# Install Git and Node.js (Amazon Linux 2023 uses the DNF package manager)
RUN dnf install -y git \
&& curl -sL https://rpm.nodesource.com/setup_20.x | bash - \
&& dnf install -y nodejs \
&& dnf clean all
# Install dependencies required by node-canvas
RUN dnf install -y \
gcc \
gcc-c++ \
cairo-devel \
pango-devel \
libjpeg-turbo-devel \
giflib-devel \
libpng-devel \
make \
&& dnf clean all
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install --omit=dev
# Copy the rest of your application code
COPY . .
# Expose the port your app runs on (adjust if necessary)
EXPOSE 4000
# Start the application
CMD ["node", "server.js"]
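This Dockerfile can be built and smoke-tested on its own before wiring it into Compose. A minimal sketch, assuming the tag `node-app` and that `.env.development` (the file referenced by `docker-compose.yml`) holds the required variables:

```bash
docker build -t node-app .
docker run --rm -p 4000:4000 --env-file .env.development node-app
```

Without the Redis cluster from `docker-compose.yml` running, the app is likely to log Redis connection errors but should still start.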

134
_reference/dockerreadme.md Normal file

@@ -0,0 +1,134 @@
# Setting up External Networking and Static IP for WSL2 using Hyper-V
This guide will walk you through the steps to configure your WSL2 (Windows Subsystem for Linux) instance to use an external Hyper-V virtual switch, enabling it to connect directly to your local network. Additionally, you'll learn how to assign a static IP address to your WSL2 instance.
## Prerequisites
1. **Windows 10/11** with **WSL2** installed.
2. **Hyper-V** enabled on your system. If not, follow these steps to enable it:
- Open PowerShell as Administrator and run:
```powershell
dism.exe /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
```
- Restart your computer.
3. A basic understanding of networking and WSL2 configuration.
---
## Step 1: Create an External Hyper-V Switch
1. **Open Hyper-V Manager**:
- Open the Start menu, search for `Hyper-V Manager`, and open it.
2. **Create a Virtual Switch**:
- In the right-hand pane, click `Virtual Switch Manager`.
- Choose `External` and click `Create Virtual Switch`.
- Select your external network adapter (this is usually your Ethernet or Wi-Fi adapter).
- Give the switch a name (e.g., `WSL External Switch`), then click `Apply` and `OK`.
---
## Step 2: Configure WSL2 to Use the External Hyper-V Switch
Now that you've created the external virtual switch, follow these steps to configure your WSL2 instance to use this switch.
1. **Set WSL2 to Use the External Switch**:
- By default, WSL2 uses NAT to connect to your local network. You need to configure WSL2 to use the external Hyper-V switch instead.
2. **Check WSL2 Networking**:
- Inside WSL, run:
```bash
ip a
```
- You should see an IP address in the range of your local network (e.g., `192.168.x.x`).
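On recent WSL builds that support bridged networking (an experimental feature), the switch created in Step 1 can be assigned in `%UserProfile%\.wslconfig`; the option names below assume that feature is available on your build:

```ini
[wsl2]
networkingMode=bridged
vmSwitch=WSL External Switch
```

Run `wsl --shutdown` afterwards so the settings take effect on the next start.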
---
## Step 3: Configure a Static IP Address for WSL2
Once WSL2 is connected to the external network, you can assign a static IP address to your WSL2 instance.
1. **Open WSL2 and edit the network configuration**:
- Depending on your Linux distribution, the file paths may vary, but typically for Ubuntu-based systems:
```bash
sudo nano /etc/netplan/01-netcfg.yaml
```
- If this file doesn't exist, create a new file or use the correct configuration file path.
2. **Configure Static IP**:
- Add or update the following configuration:
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.100/24 # Choose an IP address in your network range
      gateway4: 192.168.1.1 # Your router's IP address
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
```
- Adjust the values according to your local network settings:
- `addresses`: This is the static IP you want to assign.
- `gateway4`: This should be the IP address of your router.
- `nameservers`: These are DNS servers; you can use Google's public DNS or any other DNS provider.
3. **Apply the Changes**:
- Run the following command to apply the network configuration:
```bash
sudo netplan apply
```
4. **Verify the Static IP**:
- Check if the static IP is correctly set by running:
```bash
ip a
```
- You should see the static IP you configured (e.g., `192.168.1.100`) on the appropriate network interface (usually `eth0`).
---
## Step 4: Restart WSL2 to Apply Changes
To ensure the changes are fully applied, restart WSL2:
1. Open PowerShell or Command Prompt and run:
```powershell
wsl --shutdown
```
2. Then, start your WSL2 instance again.
## Step 5: Verify Connectivity
1. Check Internet and Local Network Connectivity:
- Run a ping command from within WSL to verify that it can reach the internet:
```bash
ping 8.8.8.8
```
2. Test Access from other Devices:
- If you're running services inside WSL (e.g., a web server), ensure they are accessible from other devices on your local network using the static IP address you configured (e.g., `http://192.168.1.100:4000`).
## Step 6: Configuring `vm.overcommit_memory` in sysctl for WSL2
To prevent memory overcommitment issues and optimize performance, you can configure the `vm.overcommit_memory` setting in WSL2. This is particularly useful when running Redis or other memory-intensive services inside WSL2, as it helps control how the Linux kernel handles memory allocation.
### 1. **Open the sysctl Configuration File**:
To set the `vm.overcommit_memory` value, you'll need to edit the sysctl configuration file. Inside your WSL2 instance, run the following command to open the `sysctl.conf` file for editing:
```bash
sudo nano /etc/sysctl.conf
```
### 2. Add the Overcommit Memory Setting:
Add the following line at the end of the file to allow memory overcommitment:
```bash
vm.overcommit_memory = 1
```
This setting tells the Linux kernel to always allow memory allocation, regardless of how much memory is available, which can prevent out-of-memory errors when running certain applications.
### 3. Apply the Changes:
After editing the file, save it and then apply the new sysctl configuration by running:
```bash
sudo sysctl -p
```
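To confirm the setting is active, query it directly:

```bash
sysctl vm.overcommit_memory
```

This should print `vm.overcommit_memory = 1` once the change has been applied.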

69
docker-compose.yml Normal file

@@ -0,0 +1,69 @@
version: '3.9'
services:
  redis-node-1:
    build:
      context: ./redis
    container_name: redis-node-1
    hostname: redis-node-1
    networks:
      - redis-cluster-net
    volumes:
      - ./redis/dockerdata/redis-node-1:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  redis-node-2:
    build:
      context: ./redis
    container_name: redis-node-2
    hostname: redis-node-2
    networks:
      - redis-cluster-net
    volumes:
      - ./redis/dockerdata/redis-node-2:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  redis-node-3:
    build:
      context: ./redis
    container_name: redis-node-3
    hostname: redis-node-3
    networks:
      - redis-cluster-net
    volumes:
      - ./redis/dockerdata/redis-node-3:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  node-app:
    build:
      context: .
    container_name: node-app
    networks:
      - redis-cluster-net
    env_file:
      - .env.development
    depends_on:
      redis-node-1:
        condition: service_healthy
      redis-node-2:
        condition: service_healthy
      redis-node-3:
        condition: service_healthy
    ports:
      - "4000:4000"
networks:
  redis-cluster-net:
    driver: bridge
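With this file in place, the stack can be brought up and checked in one pass. A sketch of the usual commands (service names as defined above):

```bash
docker compose up -d --build
docker compose exec redis-node-1 redis-cli cluster info
```

The `cluster info` output should report `cluster_state:ok` once `redis/entrypoint.sh` has finished initializing the three nodes.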

79
package-lock.json generated

@@ -36,6 +36,7 @@
"graylog2": "^0.2.1",
"inline-css": "^4.0.2",
"intuit-oauth": "^4.1.2",
"ioredis": "^5.4.1",
"json-2-csv": "^5.5.5",
"lodash": "^4.17.21",
"moment": "^2.30.1",
@@ -1397,6 +1398,12 @@
"node": ">=6"
}
},
"node_modules/@ioredis/commands": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@ioredis/commands/-/commands-1.2.0.tgz",
"integrity": "sha512-Sx1pU8EM64o2BrqNpEO1CNLtKQwyhuXuqyfH7oGKCk+1a33d2r5saW8zNwm3j6BTExtjrv2BxTgzzkMwts6vGg==",
"license": "MIT"
},
"node_modules/@isaacs/cliui": {
"version": "8.0.2",
"resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz",
@@ -3701,6 +3708,15 @@
"resolved": "https://registry.npmjs.org/delegates/-/delegates-1.0.0.tgz",
"integrity": "sha512-bd2L678uiWATM6m5Z1VzNCErI3jiGzt6HGY8OVICs40JQq/HALfbyNJmp0UDakEY4pMMaN0Ly5om/B1VI/+xfQ=="
},
"node_modules/denque": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/denque/-/denque-2.1.0.tgz",
"integrity": "sha512-HVQE3AAb/pxF8fQAoiqpvg9i3evqug3hoiwakOyZAwJm+6vZehbkYXZ0l4JxS+I3QxM97v5aaRNhj8v5oBhekw==",
"license": "Apache-2.0",
"engines": {
"node": ">=0.10"
}
},
"node_modules/depd": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz",
@@ -5008,6 +5024,30 @@
"node": ">=10"
}
},
"node_modules/ioredis": {
"version": "5.4.1",
"resolved": "https://registry.npmjs.org/ioredis/-/ioredis-5.4.1.tgz",
"integrity": "sha512-2YZsvl7jopIa1gaePkeMtd9rAcSjOOjPtpcLlOeusyO+XH2SK5ZcT+UCrElPP+WVIInh2TzeI4XW9ENaSLVVHA==",
"license": "MIT",
"dependencies": {
"@ioredis/commands": "^1.1.1",
"cluster-key-slot": "^1.1.0",
"debug": "^4.3.4",
"denque": "^2.1.0",
"lodash.defaults": "^4.2.0",
"lodash.isarguments": "^3.1.0",
"redis-errors": "^1.2.0",
"redis-parser": "^3.0.0",
"standard-as-callback": "^2.1.0"
},
"engines": {
"node": ">=12.22.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/ioredis"
}
},
"node_modules/ip": {
"version": "1.1.8",
"resolved": "https://registry.npmjs.org/ip/-/ip-1.1.8.tgz",
@@ -5357,11 +5397,23 @@
"resolved": "https://registry.npmjs.org/lodash.clonedeep/-/lodash.clonedeep-4.5.0.tgz",
"integrity": "sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ=="
},
"node_modules/lodash.defaults": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/lodash.defaults/-/lodash.defaults-4.2.0.tgz",
"integrity": "sha512-qjxPLHd3r5DnsdGacqOMU6pb/avJzdh9tFX2ymgoZE27BmjXrNy/y4LoaiTeAb+O3gL8AfpJGtqfX/ae2leYYQ==",
"license": "MIT"
},
"node_modules/lodash.includes": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz",
"integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w=="
},
"node_modules/lodash.isarguments": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/lodash.isarguments/-/lodash.isarguments-3.1.0.tgz",
"integrity": "sha512-chi4NHZlZqZD18a0imDHnZPrDeBbTtVN7GXMwuGdRH9qotxAjYs3aVLKc7zNOG9eddR5Ksd8rvFEBc9SsggPpg==",
"license": "MIT"
},
"node_modules/lodash.isboolean": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz",
@@ -6284,6 +6336,27 @@
"@redis/time-series": "1.1.0"
}
},
"node_modules/redis-errors": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/redis-errors/-/redis-errors-1.2.0.tgz",
"integrity": "sha512-1qny3OExCf0UvUV/5wpYKf2YwPcOqXzkwKKSmKHiE6ZMQs5heeE/c8eXK+PNllPvmjgAbfnsbpkGZWy8cBpn9w==",
"license": "MIT",
"engines": {
"node": ">=4"
}
},
"node_modules/redis-parser": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/redis-parser/-/redis-parser-3.0.0.tgz",
"integrity": "sha512-DJnGAeenTdpMEH6uAJRK/uiyEIH9WVsUmoLwzudwGJUwZPp80PDBWPHXSAGNPwNvIXAbe7MSUB1zQFugFml66A==",
"license": "MIT",
"dependencies": {
"redis-errors": "^1.0.0"
},
"engines": {
"node": ">=4"
}
},
"node_modules/regenerator-runtime": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.0.tgz",
@@ -6934,6 +7007,12 @@
"node": "*"
}
},
"node_modules/standard-as-callback": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/standard-as-callback/-/standard-as-callback-2.1.0.tgz",
"integrity": "sha512-qoRRSyROncaz1z0mvYqIE4lCd9p2R90i6GxW3uZv5ucSu8tU7B5HXUP1gG8pVZsYNVaXjk8ClXHPttLyxAL48A==",
"license": "MIT"
},
"node_modules/statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",


@@ -46,6 +46,7 @@
"graylog2": "^0.2.1",
"inline-css": "^4.0.2",
"intuit-oauth": "^4.1.2",
"ioredis": "^5.4.1",
"json-2-csv": "^5.5.5",
"lodash": "^4.17.21",
"moment": "^2.30.1",

2
redis/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
redis-cluster-init.lock
dockerdata/

20
redis/Dockerfile Normal file

@@ -0,0 +1,20 @@
# Use the official Redis image
FROM redis:7.0-alpine
# Copy the Redis configuration file
COPY redis.conf /usr/local/etc/redis/redis.conf
# Copy the entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
# Make the entrypoint script executable
RUN chmod +x /usr/local/bin/entrypoint.sh
# Debugging step: List contents of /usr/local/bin
RUN ls -l /usr/local/bin
# Expose Redis ports
EXPOSE 6379 16379
# Set the entrypoint
ENTRYPOINT ["entrypoint.sh"]

3
redis/dockerdata/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
.gitkeep
!.gitignore
!.gitkeep


30
redis/entrypoint.sh Normal file

@@ -0,0 +1,30 @@
#!/bin/sh
LOCK_FILE="redis-cluster-init.lock"
# Start Redis server in the background
redis-server /usr/local/etc/redis/redis.conf &
# Wait for Redis server to start
sleep 5
# Initialize the cluster only if the lock file does not exist
if [ ! -f "$LOCK_FILE" ]; then
  echo "Initializing Redis Cluster..."
  # Run the Redis cluster initialization
  yes yes | redis-cli --cluster create \
    redis-node-1:6379 \
    redis-node-2:6379 \
    redis-node-3:6379 \
    --cluster-replicas 0
  # Create the lock file after initialization
  touch "$LOCK_FILE"
  echo "Cluster initialization complete. Lock file created."
else
  echo "Cluster has already been initialized. Skipping initialization."
fi
# Keep the container running
tail -f /dev/null
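The fixed `sleep 5` assumes the server is always up within five seconds. A more defensive variant (same tools, just polling instead of sleeping) might look like:

```sh
# Poll the local server until it answers PING, instead of a fixed sleep
until redis-cli -h 127.0.0.1 ping > /dev/null 2>&1; do
  echo "Waiting for Redis to start..."
  sleep 1
done
```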

6
redis/redis.conf Normal file

@@ -0,0 +1,6 @@
bind 0.0.0.0
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes


@@ -6,7 +6,8 @@ const compression = require("compression");
const cookieParser = require("cookie-parser");
const http = require("http");
const { Server } = require("socket.io");
const { createClient } = require("redis");
// const { createClient } = require("redis");
const Redis = require("ioredis");
const { createAdapter } = require("@socket.io/redis-adapter");
const logger = require("./server/utils/logger");
const { redisSocketEvents } = require("./server/web-sockets/redisSocketEvents");
@@ -108,19 +109,48 @@ const applyRoutes = (app) => {
* @param app
*/
const applySocketIO = async (server, app) => {
// Redis client setup for Pub/Sub and Key-Value Store
const pubClient = createClient({ url: process.env.REDIS_URL || "redis://localhost:6379" });
  const redisCluster = new Redis.Cluster(
    process.env.REDIS_URL
      ? JSON.parse(process.env.REDIS_URL)
      : [
          {
            // ioredis cluster seed nodes take host/port pairs, not a redis:// URL
            host: "localhost",
            port: 6379
          }
        ],
    {
      clusterRetryStrategy: function (times) {
        const delay = Math.min(100 + times * 50, 2000);
        logger.log(
          `[${process.env.NODE_ENV}] Redis cluster not yet ready. Retrying in ${delay}ms`,
          "ERROR",
          "redis",
          "api"
        );
        return delay;
      }
    }
  );
  // Handle errors
  redisCluster.on("error", (err) => {
    logger.log(`[${process.env.NODE_ENV}] Redis ERROR: ${err}`, "ERROR", "redis", "api");
  });
  const pubClient = redisCluster;
  const subClient = pubClient.duplicate();
// https://github.com/redis/node-redis/blob/master/docs/clustering.md
// https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html
pubClient.on("error", (err) => logger.log(`Redis pubClient error: ${err}`, "ERROR", "redis"));
subClient.on("error", (err) => logger.log(`Redis subClient error: ${err}`, "ERROR", "redis"));
  // ioredis connects automatically when the client is constructed, so no
  // explicit connect() call is needed (calling connect() again rejects).
  pubClient.on("ready", () => {
    logger.log(`[${process.env.NODE_ENV}] Connected to Redis`, "INFO", "redis", "api");
  });
process.on("SIGINT", async () => {
logger.log("Closing Redis connections...", "INFO", "redis", "api");

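The `clusterRetryStrategy` above backs off linearly from 150 ms and caps at 2 s. A standalone sketch of the schedule it produces:

```javascript
// Same formula as the clusterRetryStrategy in the diff above:
// delay = min(100 + times * 50, 2000) milliseconds
const retryDelay = (times) => Math.min(100 + times * 50, 2000);

// Early attempts ramp up linearly, then plateau at the 2000 ms cap
const schedule = [1, 2, 10, 38, 100].map(retryDelay);
console.log(schedule); // [ 150, 200, 600, 2000, 2000 ]
```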

@@ -8,7 +8,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Store session data in Redis
const setSessionData = async (socketId, key, value) => {
try {
await pubClient.hSet(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
await pubClient.hset(`socket:${socketId}`, key, JSON.stringify(value)); // Use Redis pubClient
} catch (error) {
logger.log(`Error Setting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -17,7 +17,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve session data from Redis
const getSessionData = async (socketId, key) => {
try {
const data = await pubClient.hGet(`socket:${socketId}`, key);
const data = await pubClient.hget(`socket:${socketId}`, key);
return data ? JSON.parse(data) : null;
} catch (error) {
logger.log(`Error Getting Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
@@ -38,7 +38,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
try {
// keyValues is expected to be an object { key1: value1, key2: value2, ... }
const entries = Object.entries(keyValues).map(([key, value]) => [key, JSON.stringify(value)]);
await pubClient.hSet(`socket:${socketId}`, ...entries.flat());
await pubClient.hset(`socket:${socketId}`, ...entries.flat());
} catch (error) {
logger.log(`Error Setting Multiple Session Data for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -47,7 +47,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Retrieve multiple session data from Redis
const getMultipleSessionData = async (socketId, keys) => {
try {
const data = await pubClient.hmGet(`socket:${socketId}`, keys);
const data = await pubClient.hmget(`socket:${socketId}`, keys);
// Redis returns an object with null values for missing keys, so we parse the non-null ones
return Object.fromEntries(keys.map((key, index) => [key, data[index] ? JSON.parse(data[index]) : null]));
} catch (error) {
@@ -71,7 +71,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the end of the Redis list
const addItemToEndOfList = async (socketId, key, newItem) => {
try {
await pubClient.rPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.rpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the end of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -80,7 +80,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Helper function to add an item to the beginning of the Redis list
const addItemToBeginningOfList = async (socketId, key, newItem) => {
try {
await pubClient.lPush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
await pubClient.lpush(`socket:${socketId}:${key}`, JSON.stringify(newItem));
} catch (error) {
logger.log(`Error adding item to the beginning of the list for socket ${socketId}: ${error}`, "ERROR", "redis");
}
@@ -98,7 +98,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
// Add methods to manage room users
const addUserToRoom = async (room, user) => {
try {
await pubClient.sAdd(room, JSON.stringify(user));
await pubClient.sadd(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error adding user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -106,7 +106,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
const removeUserFromRoom = async (room, user) => {
try {
await pubClient.sRem(room, JSON.stringify(user));
await pubClient.srem(room, JSON.stringify(user));
} catch (error) {
logger.log(`Error removing user to room ${room}: ${error}`, "ERROR", "redis");
}
@@ -114,7 +114,7 @@ const applyRedisHelpers = (pubClient, app, logger) => {
const getUsersInRoom = async (room) => {
try {
const users = await pubClient.sMembers(room);
const users = await pubClient.smembers(room);
return users.map((user) => JSON.parse(user));
} catch (error) {
logger.log(`Error getting users in room ${room}: ${error}`, "ERROR", "redis");
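These helpers all follow the same convention: JSON-encode on write, JSON-parse on read, `null` for missing keys. A hypothetical in-memory stand-in (a `Map` in place of the Redis hash) demonstrating that round-trip:

```javascript
// In-memory stand-in for the Redis hash used by the session helpers above.
const store = new Map();

// Mirrors setSessionData: values are JSON-encoded before being stored
const setSessionData = (socketId, key, value) => {
  store.set(`socket:${socketId}:${key}`, JSON.stringify(value));
};

// Mirrors getSessionData: decode on read, null when the key is absent
const getSessionData = (socketId, key) => {
  const data = store.get(`socket:${socketId}:${key}`);
  return data ? JSON.parse(data) : null;
};

setSessionData("abc123", "user", { id: 7, name: "dave" });
console.log(getSessionData("abc123", "user")); // { id: 7, name: 'dave' }
console.log(getSessionData("abc123", "missing")); // null
```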