Compare commits

...

30 Commits

Author SHA1 Message Date
Stéphan Sainléger
516b9def3d [IMP] add external command verification at startup
Add check_required_commands() function to verify that all required
external tools are available before the script begins execution:
- docker: Container runtime
- compose: Docker compose wrapper (0k-scripts)
- sudo: Required for filestore operations

Benefits:
- Fails fast with a clear error message listing missing commands
- Prevents cryptic 'command not found' errors mid-execution
- Documents script dependencies explicitly
- Called immediately after argument validation in upgrade.sh
2026-02-02 20:06:27 +01:00
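For illustration, the probe at the heart of such a check relies on command -v; a minimal sketch (the full function is in lib/common.sh further down in this diff):

for cmd in docker compose sudo; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "[ERROR] Required command not found: $cmd" >&2; exit 1; }
done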
Stéphan Sainléger
20214b4402 [IMP] apply naming conventions for variables
Apply consistent naming conventions throughout upgrade.sh:
- UPPERCASE + readonly for script-level constants (immutable values)
- lowercase for temporary/local variables within the script flow

Constants marked readonly:
- ORIGIN_VERSION, FINAL_VERSION, ORIGIN_DB_NAME, ORIGIN_SERVICE_NAME
- COPY_DB_NAME, FINALE_DB_NAME, FINALE_SERVICE_NAME
- POSTGRES_SERVICE_NAME

Local variables renamed to lowercase:
- postgres_containers, postgres_count (detection phase)
- db_exists, filestore_path (validation phase)

This convention makes it immediately clear which variables are
configuration constants vs runtime values, and prevents accidental
modification of critical values.
2026-02-02 20:06:27 +01:00
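A minimal sketch of what the convention looks like in practice (values illustrative):

# configuration constant: uppercase + readonly, set once
readonly FINAL_VERSION="16"
# runtime value: lowercase, free to change during the script flow
postgres_count=1
# any later attempt to modify the constant is rejected by bash
FINAL_VERSION="17"   # bash: FINAL_VERSION: readonly variable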
Stéphan Sainléger
fa8b1b98f1 [IMP] use mktemp and trap for temporary file cleanup
Replace hardcoded temporary file paths with mktemp -d for secure
temporary directory creation, and add a trap to automatically clean
up on script exit (success, failure, or interruption).

Benefits:
- Automatic cleanup even on Ctrl+C or script errors
- No leftover temporary files in the working directory
- Secure temporary directory creation (proper permissions)
- Files isolated in dedicated temp directory

Added '|| true' to grep command since it returns exit code 1 when
no matches are found, which would trigger set -e otherwise.
2026-02-02 20:06:27 +01:00
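The resulting shape in prepare_db.sh (see the diff further down), roughly:

tmpdir=$(mktemp -d)                  # secure, uniquely named directory
trap 'rm -rf "$tmpdir"' EXIT         # runs on normal exit, on errors and on Ctrl+C
# grep exits 1 when nothing matches, which would abort the script under set -e
grep -Fx -f "$tmpdir/404_addons" "$tmpdir/installed_addons" > "$tmpdir/final_404_addons" || true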
Stéphan Sainléger
7c3d5d49c8 [IMP] use heredoc with variable expansion for SQL query
Convert the SQL_404_ADDONS_LIST query from a quoted string to a heredoc
without quotes (<<EOF instead of <<'EOF') to make variable expansion
explicit and consistent with other SQL blocks in the codebase.

Key difference between heredoc variants:
- <<'EOF': Literal content, no variable expansion (use for static SQL)
- <<EOF: Variables like ${FINALE_DB_NAME} are expanded (use when needed)

Also improved SQL formatting for better readability.
2026-02-02 20:06:27 +01:00
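A short illustration of the two variants (the variable value is a placeholder):

FINALE_DB_NAME="ou17"
# quoted delimiter: literal text, ${FINALE_DB_NAME} is NOT expanded
cat <<'EOF'
dbname=${FINALE_DB_NAME}
EOF
# unquoted delimiter: the variable is expanded to ou17
cat <<EOF
dbname=${FINALE_DB_NAME}
EOF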
Stéphan Sainléger
333360f9f0 [IMP] add structured logging functions
Add logging functions to lib/common.sh for consistent output formatting:
- log_info(): Standard informational messages with [INFO] prefix
- log_warn(): Warning messages to stderr with [WARN] prefix
- log_error(): Error messages to stderr with [ERROR] prefix
- log_step(): Section headers with visual separators

Update upgrade.sh to use these functions throughout, replacing ad-hoc
echo statements. This provides:
- Consistent visual formatting across all scripts
- Clear distinction between info, warnings and errors
- Errors properly sent to stderr
- Easier log parsing and filtering

Also removed redundant '|| exit 1' statements since set -e handles
command failures automatically.
2026-02-02 20:06:27 +01:00
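Typical usage, assuming the functions from lib/common.sh shown further down (messages illustrative):

log_step "DATABASE PREPARATION"
log_info "Database 'ou16' found."
log_warn "Filestore already exists, it will be overwritten."
log_error "Container lokavaluto_postgres_1 is not running."
# warnings and errors go to stderr, so they can be captured separately:
./upgrade.sh 14 16 elabore_20241208 odoo14 2>upgrade_errors.log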
Stéphan Sainléger
4b27955be9 [IMP] centralize common functions in lib/common.sh
Extract shared utility functions into a dedicated library file:
- query_postgres_container: Execute SQL queries in postgres container
- copy_database: Copy database using pgm
- copy_filestore: Copy Odoo filestore directory
- exec_python_script_in_odoo_shell: Run Python scripts in Odoo shell

Benefits:
- Single source of truth for utility functions
- Easier maintenance and testing
- Consistent behavior across all scripts
- Reduced code duplication

Also introduces readonly constants DATASTORE_PATH and FILESTORE_SUBPATH
to avoid hardcoded paths scattered throughout the codebase.
2026-02-02 20:06:27 +01:00
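Consumer scripts then source the library and build paths from the shared constants, as upgrade.sh does further down; a sketch:

#!/bin/bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
# shared constants instead of hardcoded /srv/datastore/... paths
filestore_path="${DATASTORE_PATH}/${ORIGIN_SERVICE_NAME}/${FILESTORE_SUBPATH}/${ORIGIN_DB_NAME}"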
Stéphan Sainléger
a91afa60d3 [IMP] fix undefined variable DB_CONTAINER_NAME
Replace $DB_CONTAINER_NAME with $POSTGRES_SERVICE_NAME which is the
correct variable exported from the parent script (upgrade.sh).

DB_CONTAINER_NAME was never defined, causing the script to fail
immediately with 'set -u' enabled (unbound variable error). The
intended variable is POSTGRES_SERVICE_NAME which contains the name
of the PostgreSQL container detected at runtime.
2026-02-02 20:06:27 +01:00
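For reference, this is what the failure looked like under set -u (illustrative):

set -u
echo "Container: $DB_CONTAINER_NAME"
# bash: DB_CONTAINER_NAME: unbound variable   (the script stops immediately)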
Stéphan Sainléger
a1508daf24 [IMP] fix return statement outside function
Replace 'return 1' with 'exit 1' in prepare_db.sh.

The 'return' statement is only valid inside functions. When used at
the script's top level, it behaves unpredictably - in some shells it
exits the script, in others it's an error. Using 'exit 1' explicitly
terminates the script with an error status, which is the intended
behavior when the PostgreSQL container is not running.
2026-02-02 20:06:27 +01:00
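A tiny illustration of the difference (function and variable names follow this repo):

# inside a function, 'return' ends the function with a status:
container_running() { docker ps | grep -q "$1"; }
# at the top level of an executed script, 'return' is invalid; use 'exit':
if ! container_running "$POSTGRES_SERVICE_NAME"; then
    exit 1
fi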
Stéphan Sainléger
3fe2e93d3d [IMP] use [[ instead of [ for conditionals
Replace single bracket [ ] with double bracket [[ ]] for all test
conditionals in the main scripts.

Benefits of [[ over [:
- No need to quote variables (though we still do for consistency)
- Supports regex matching with =~
- Supports pattern matching with == and !=
- && and || work inside [[ ]] without escaping
- More predictable behavior with empty strings
- Is a bash keyword, not an external command

Note: posbox scripts are left unchanged as they appear to be
third-party code imported into the repository.
2026-02-02 20:06:27 +01:00
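Two examples of what [[ ]] permits (values illustrative):

version="17"
if [[ "$version" =~ ^[0-9]+$ ]]; then echo "numeric"; fi             # regex match
if [[ "$version" == 1* && -n "$version" ]]; then echo "matches"; fi  # glob match plus && inside the test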
Stéphan Sainléger
526b27fdec [IMP] fix variable quoting issues
Properly quote all variable expansions to prevent word splitting and
glob expansion issues:
- Quote $POSTGRES_SERVICE_NAME in docker exec command
- Quote $REPERTOIRE in directory test
- Remove the unnecessary $ inside arithmetic expressions: $(($VAR)) -> $((VAR))

Unquoted variables can cause unexpected behavior when values contain
spaces or special characters. In arithmetic contexts, $ is unnecessary
and can mask errors with set -u.
2026-02-02 20:06:27 +01:00
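A short sketch of both points (the path value is illustrative, variable names are the repo's):

REPERTOIRE="/srv/datastore/data/my service"    # value containing a space
[[ -d "$REPERTOIRE" ]] || echo "missing"       # quoted: tested as a single path
# [ -d $REPERTOIRE ]                           # unquoted with [ ]: splits into two words
nb_migrations=$((FINAL_VERSION - ORIGIN_VERSION))   # no $ needed inside $(( ))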
Stéphan Sainléger
266842585b [IMP] add argument validation with usage message
Add proper argument validation at the start of upgrade.sh:
- Check that exactly 4 arguments are provided
- Display a helpful usage message with argument descriptions
- Include a concrete example command

This prevents cryptic errors when the script is called incorrectly
and provides clear guidance on expected parameters. With set -u enabled,
accessing unset positional parameters would cause an unclear error message.
2026-02-02 20:06:27 +01:00
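The shape of the check as it appears in upgrade.sh further down:

if [[ $# -lt 4 ]]; then
    log_error "Missing arguments. Expected 4, got $#."
    usage    # prints the help text with an example command, then exits 1
fi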
Stéphan Sainléger
30909a3b28 [IMP] add strict mode (set -euo pipefail) to all scripts
Enable bash strict mode in all shell scripts to catch errors early:
- set -e: Exit immediately if a command exits with non-zero status
- set -u: Treat unset variables as an error
- set -o pipefail: Return value of a pipeline is the status of the last
  command to exit with non-zero status

This prevents silent failures and makes debugging easier by failing fast
when something goes wrong instead of continuing with potentially corrupted
state.
2026-02-02 20:06:27 +01:00
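A small example of what pipefail changes (commands illustrative):

set -euo pipefail
# without pipefail the pipeline reports the status of 'wc -l' (success),
# so a failing psql would go unnoticed; with pipefail the script aborts here
psql -d missing_db -c 'SELECT 1' | wc -l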
Stéphan Sainléger
4d7933cef0 [IMP] add final script to purge QWeb cache from compiled assets 2026-02-02 17:24:11 +01:00
Stéphan Sainléger
4a3d3b238f [IMP] add final script to reset all custom website templates 2026-02-02 17:23:24 +01:00
Stéphan Sainléger
59fc39620d [IMP] improves the way postgres container is detected 2026-01-16 14:19:51 +01:00
Stéphan Sainléger
7d001ff163 [IMP] adds python script to clean obsolete addons 2026-01-13 15:08:11 +01:00
Stéphan Sainléger
da59dffcfa [IMP] adds .gitignore 2026-01-13 12:50:16 +01:00
Stéphan Sainléger
d8b332762b [IMP] set debug log level on compose run commands 2026-01-13 12:44:35 +01:00
Stéphan Sainléger
023deeea5b [IMP] adds duplicated views cleaning at finalize db step 2026-01-13 12:39:17 +01:00
Stéphan Sainléger
743d1ce831 [IMP] adds check view python scripts at db preparation step 2026-01-13 12:38:47 +01:00
Stéphan Sainléger
469fb42e48 [IMP] add function to execute python scripts in Odoo shell 2026-01-13 12:37:38 +01:00
Stéphan Sainléger
f18d50cb94 [CLN] remove useless force_uninstall_addons file 2026-01-12 17:09:38 +01:00
Stéphan Sainléger
afeaa3d00f [IMP] fix the issue of account_analytic_plan migration in v17
It would be good to understand the origin of the problem one day...
2026-01-12 17:08:43 +01:00
Stéphan Sainléger
93fc10395f [IMP] add base in --load option of upgrade compose run command
due to the transformation of account.invoice into account.move
2026-01-12 17:00:03 +01:00
Stéphan Sainléger
458af6a795 [IMP] use odoo image rc/16.0-ELABORE-LIGHT instead of rc/16.0-MYC-INIT 2025-09-11 11:03:33 +02:00
Stéphan Sainléger
61733b04a3 [NEW] add migration scripts for Odoo 18.0 2025-09-11 11:03:29 +02:00
Stéphan Sainléger
385b9bc751 [NEW] add migration scripts for Odoo 17.0 2025-09-11 10:39:37 +02:00
Stéphan Sainléger
f432b4c75e [IMP] add commented command to launch global addons update at each step of the migration 2025-09-11 10:35:13 +02:00
Stéphan Sainléger
21028149be [IMP] update postgres version to 17.2.0 2025-09-11 10:33:39 +02:00
Stéphan Sainléger
972e6c7b26 [FIX] add su access for filestore manipulation 2025-09-11 10:31:52 +02:00
28 changed files with 883 additions and 218 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
final_404_addons

13.0/post_upgrade.sh

@@ -0,0 +1,6 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 13.0..."
#compose --debug run ou13 -u base --stop-after-init --no-http

13.0/pre_upgrade.sh

@@ -1,4 +1,5 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 13.0..."

13.0/upgrade.sh

@@ -1,3 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8013:8069 ou13 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=warn --max-cron-threads=0 --limit-time-real=10000 --database=ou13
compose -f ../compose.yml run -p 8013:8069 ou13 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou13

14.0/post_upgrade.sh

@@ -0,0 +1,6 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 14.0..."
#compose --debug run ou14 -u base --stop-after-init --no-http

14.0/pre_upgrade.sh

@@ -1,4 +1,5 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 14.0..."

14.0/upgrade.sh

@@ -1,3 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8014:8069 ou14 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=warn --max-cron-threads=0 --limit-time-real=10000 --database=ou14 --load=web,openupgrade_framework
compose -f ../compose.yml run -p 8014:8069 ou14 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou14 --load=base,web,openupgrade_framework

15.0/post_upgrade.sh

@@ -0,0 +1,6 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 15.0..."
#compose --debug run ou15 -u base --stop-after-init --no-http

15.0/pre_upgrade.sh

@@ -1,4 +1,5 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 15.0..."

15.0/upgrade.sh

@@ -1,3 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8015:8069 ou15 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=warn --max-cron-threads=0 --limit-time-real=10000 --database=ou15 --load=web,openupgrade_framework
compose -f ../compose.yml run -p 8015:8069 ou15 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou15 --load=base,web,openupgrade_framework

16.0/post_upgrade.sh

@@ -0,0 +1,6 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 16.0..."
#compose --debug run ou16 -u base --stop-after-init --no-http

16.0/pre_upgrade.sh

@@ -1,4 +1,5 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 16.0..."

16.0/upgrade.sh

@@ -1,3 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8016:8069 ou16 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=warn --max-cron-threads=0 --limit-time-real=10000 --database=ou16 --load=web,openupgrade_framework
compose -f ../compose.yml run -p 8016:8069 ou16 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou16 --load=base,web,openupgrade_framework

32
17.0/post_upgrade.sh Executable file

@@ -0,0 +1,32 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 17.0..."
# Execute SQL post-migration commands
POST_MIGRATE_SQL=$(cat <<'EOF'
DO $$
DECLARE
plan_id INTEGER;
BEGIN
-- Check if the 'Projects' analytic plan exists
SELECT id INTO plan_id FROM account_analytic_plan WHERE complete_name = 'migration_PROJECTS' LIMIT 1;
-- If it does exist, delete it
IF plan_id IS NOT NULL THEN
DELETE FROM account_analytic_plan WHERE complete_name = 'migration_PROJECTS';
SELECT id INTO plan_id FROM account_analytic_plan WHERE complete_name = 'Projects' LIMIT 1;
-- Delete existing system parameter (if any)
DELETE FROM ir_config_parameter WHERE key = 'analytic.project_plan';
-- Insert the system parameter with the correct plan ID
INSERT INTO ir_config_parameter (key, value, create_date, write_date)
VALUES ('analytic.project_plan', plan_id::text, now(), now());
END IF;
END $$;
EOF
)
echo "SQL command = $POST_MIGRATE_SQL"
query_postgres_container "$POST_MIGRATE_SQL" ou17 || exit 1
#compose --debug run ou17 -u base --stop-after-init --no-http

57
17.0/pre_upgrade.sh Executable file

@@ -0,0 +1,57 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 17.0..."
# Copy database
copy_database ou16 ou17 ou17 || exit 1
# Execute SQL pre-migration commands
PRE_MIGRATE_SQL=$(cat <<'EOF'
DO $$
DECLARE
plan_id INTEGER;
BEGIN
-- Check if the 'Projects' analytic plan exists
SELECT id INTO plan_id FROM account_analytic_plan WHERE name = 'Projects' LIMIT 1;
-- If it doesn't exist, create it
IF plan_id IS NULL THEN
INSERT INTO account_analytic_plan (name, complete_name, default_applicability, create_date, write_date)
VALUES ('Projects', 'migration_PROJECTS', 'optional', now(), now())
RETURNING id INTO plan_id;
END IF;
-- Delete existing system parameter (if any)
DELETE FROM ir_config_parameter WHERE key = 'analytic.project_plan';
-- Insert the system parameter with the correct plan ID
INSERT INTO ir_config_parameter (key, value, create_date, write_date)
VALUES ('analytic.project_plan', plan_id::text, now(), now());
END $$;
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL"
query_postgres_container "$PRE_MIGRATE_SQL" ou17 || exit 1
PRE_MIGRATE_SQL_2=$(cat <<'EOF'
DELETE FROM ir_model_fields WHERE name = 'kanban_state_label';
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL_2"
query_postgres_container "$PRE_MIGRATE_SQL_2" ou17 || exit 1
PRE_MIGRATE_SQL_3=$(cat <<'EOF'
DELETE FROM ir_model_fields WHERE name = 'phone' AND model='hr.employee';
DELETE FROM ir_model_fields WHERE name = 'hr_responsible_id' AND model='hr.job';
DELETE FROM ir_model_fields WHERE name = 'address_home_id' AND model='hr.employee';
DELETE FROM ir_model_fields WHERE name = 'manager_id' AND model='project.task';
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL_3"
query_postgres_container "$PRE_MIGRATE_SQL_3" ou17 || exit 1
# Copy filestores
copy_filestore ou16 ou16 ou17 ou17 || exit 1
echo "Ready for migration to 17.0!"

4
17.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8017:8069 ou17 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou17 --load=base,web,openupgrade_framework

6
18.0/post_upgrade.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 18.0..."
#compose --debug run ou18 -u base --stop-after-init --no-http

20
18.0/pre_upgrade.sh Executable file

@@ -0,0 +1,20 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 18.0..."
# Copy database
copy_database ou17 ou18 ou18 || exit 1
# Execute SQL pre-migration commands
PRE_MIGRATE_SQL=$(cat <<'EOF'
UPDATE account_analytic_plan SET default_applicability=NULL WHERE default_applicability='optional';
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL"
query_postgres_container "$PRE_MIGRATE_SQL" ou18 || exit 1
# Copy filestores
copy_filestore ou17 ou17 ou18 ou18 || exit 1
echo "Ready for migration to 18.0!"

4
18.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8018:8069 ou18 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou18 --load=base,web,openupgrade_framework

compose.yml

@@ -52,7 +52,7 @@ ou15:
ou16:
charm: odoo-tecnativa
docker-compose:
image: docker.0k.io/mirror/odoo:rc_16.0-MYC-INIT
image: docker.0k.io/mirror/odoo:rc_16.0-ELABORE-LIGHT
## Important to keep as a list: otherwise it'll overwrite charm's arguments.
command:
- "--log-level=debug"
@@ -73,6 +73,18 @@ ou17:
options:
workers: 0
ou18:
charm: odoo-tecnativa
docker-compose:
image: docker.0k.io/mirror/odoo:rc_18.0-ELABORE-LIGHT
## Important to keep as a list: otherwise it'll overwrite charm's arguments.
command:
- "--log-level=debug"
- "--limit-time-cpu=1000000"
- "--limit-time-real=1000000"
options:
workers: 0
postgres:
docker-compose:
image: docker.0k.io/postgres:12.15.0-myc
image: docker.0k.io/postgres:17.2.0-myc

finalize_db.sh

@@ -1,4 +1,5 @@
#!/bin/bash
set -euo pipefail
DB_NAME="$1"
ODOO_SERVICE="$2"
@@ -11,10 +12,41 @@ EOF
)
query_postgres_container "$FINALE_SQL" "$DB_NAME" || exit 1
# Fix duplicated views
PYTHON_SCRIPT=post_migration_fix_duplicated_views.py
echo "Remove duplicated views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
# Reset all website templates with custom content
FINALE_SQL_2=$(cat <<'EOF'
UPDATE ir_ui_view
SET arch_db = NULL
WHERE arch_fs IS NOT NULL
AND arch_fs LIKE 'website/%'
AND arch_db IS NOT NULL
AND id NOT IN (SELECT view_id FROM website_page);
EOF
)
query_postgres_container "$FINALE_SQL_2" "$DB_NAME" || exit 1
# Purge QWeb cache from compiled assets
FINALE_SQL_3=$(cat <<'EOF'
DELETE FROM ir_attachment
WHERE name LIKE '/web/assets/%'
OR name LIKE '%.assets_%'
OR (res_model = 'ir.ui.view' AND mimetype = 'text/css');
EOF
)
query_postgres_container "$FINALE_SQL_3" "$DB_NAME" || exit 1
# Uninstall obsolete add-ons
PYTHON_SCRIPT=post_migration_cleanup_obsolete_modules.py
echo "Uninstall obsolete add-ons with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
# Give back the right to user to access to the tables
# docker exec -u 70 "$DB_CONTAINER_NAME" pgm chown "$FINALE_SERVICE_NAME" "$DB_NAME"
# Launch Odoo with database in finale version to run all updates
compose --debug run "$ODOO_SERVICE" -u all --stop-after-init --no-http
compose --debug run "$ODOO_SERVICE" -u all --log-level=debug --stop-after-init --no-http

force_uninstall_addons

@@ -1,24 +0,0 @@
galicea_base
galicea_environment_checkup
mass_editing
mass_mailing_themes
muk_autovacuum
muk_fields_lobject
muk_fields_stream
muk_utils
muk_web_theme_mail
muk_web_utils
account_usability
kpi_dashboard
web_window_title
website_project_kanbanview
project_usability
project_tag
maintenance_server_monitoring_ping
maintenance_server_monitoring_ssh
maintenance_server_monitoring_memory
maintenance_server_monitoring_maintenance_equipment_status
maintenance_server_monitoring_disk
project_task_assignees_avatar
account_partner_reconcile
account_invoice_import_simple_pdf

82
lib/common.sh Normal file

@@ -0,0 +1,82 @@
#!/bin/bash
#
# Common functions for Odoo migration scripts
# Source this file from other scripts: source "$(dirname "$0")/lib/common.sh"
#
set -euo pipefail
readonly DATASTORE_PATH="/srv/datastore/data"
readonly FILESTORE_SUBPATH="var/lib/odoo/filestore"
check_required_commands() {
local missing=()
for cmd in docker compose sudo; do
if ! command -v "$cmd" &>/dev/null; then
missing+=("$cmd")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Required commands not found: ${missing[*]}"
log_error "Please install them before running this script."
exit 1
fi
}
log_info() { printf "[INFO] %s\n" "$*"; }
log_warn() { printf "[WARN] %s\n" "$*" >&2; }
log_error() { printf "[ERROR] %s\n" "$*" >&2; }
log_step() { printf "\n===== %s =====\n" "$*"; }
query_postgres_container() {
local query="$1"
local db_name="$2"
if [[ -z "$query" ]]; then
return 0
fi
local result
if ! result=$(docker exec -u 70 "$POSTGRES_SERVICE_NAME" psql -d "$db_name" -t -A -c "$query"); then
printf "Failed to execute SQL query: %s\n" "$query" >&2
printf "Error: %s\n" "$result" >&2
return 1
fi
echo "$result"
}
copy_database() {
local from_db="$1"
local to_service="$2"
local to_db="$3"
docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm cp -f "$from_db" "${to_db}@${to_service}"
}
copy_filestore() {
local from_service="$1"
local from_db="$2"
local to_service="$3"
local to_db="$4"
local src_path="${DATASTORE_PATH}/${from_service}/${FILESTORE_SUBPATH}/${from_db}"
local dst_path="${DATASTORE_PATH}/${to_service}/${FILESTORE_SUBPATH}/${to_db}"
sudo mkdir -p "$dst_path"
sudo rm -rf "$dst_path"
sudo cp -a "$src_path" "$dst_path"
echo "Filestore ${from_service}/${from_db} copied to ${to_service}/${to_db}."
}
exec_python_script_in_odoo_shell() {
local service_name="$1"
local db_name="$2"
local python_script="$3"
compose --debug run "$service_name" shell -d "$db_name" --no-http --stop-after-init < "$python_script"
}
export DATASTORE_PATH FILESTORE_SUBPATH
export -f log_info log_warn log_error log_step
export -f check_required_commands
export -f query_postgres_container copy_database copy_filestore exec_python_script_in_odoo_shell

post_migration_cleanup_obsolete_modules.py

@@ -0,0 +1,128 @@
#!/usr/bin/env python3
"""
Post-Migration Obsolete Module Cleanup
Run this AFTER migration to detect and remove modules that exist in the database
but no longer exist in the filesystem (addons paths).
"""
print("\n" + "="*80)
print("POST-MIGRATION OBSOLETE MODULE CLEANUP")
print("="*80 + "\n")
import odoo.modules.module as module_lib
# Get all modules from database
all_modules = env['ir.module.module'].search([])
print(f"Analyzing {len(all_modules)} modules in database...\n")
# Detect obsolete modules (in database but not in filesystem)
obsolete_modules = []
for mod in all_modules:
mod_path = module_lib.get_module_path(mod.name, display_warning=False)
if not mod_path:
obsolete_modules.append(mod)
if not obsolete_modules:
print("✓ No obsolete modules found! Database is clean.")
print("=" * 80 + "\n")
exit()
# Separate modules by state
safe_to_delete = [m for m in obsolete_modules if m.state != 'installed']
installed_obsolete = [m for m in obsolete_modules if m.state == 'installed']
# Display obsolete modules
print(f"Obsolete modules found: {len(obsolete_modules)}\n")
if installed_obsolete:
print("-" * 80)
print("⚠️ OBSOLETE INSTALLED MODULES (require attention)")
print("-" * 80)
for mod in sorted(installed_obsolete, key=lambda m: m.name):
print(f"{mod.name:40} | ID: {mod.id}")
print()
if safe_to_delete:
print("-" * 80)
print("OBSOLETE UNINSTALLED MODULES (safe to delete)")
print("-" * 80)
for mod in sorted(safe_to_delete, key=lambda m: m.name):
print(f"{mod.name:40} | State: {mod.state:15} | ID: {mod.id}")
print()
# Summary
print("=" * 80)
print("SUMMARY")
print("=" * 80 + "\n")
print(f" • Obsolete uninstalled modules (safe to delete): {len(safe_to_delete)}")
print(f" • Obsolete INSTALLED modules (caution!): {len(installed_obsolete)}")
# Delete uninstalled modules
if safe_to_delete:
print("\n" + "=" * 80)
print("DELETING OBSOLETE UNINSTALLED MODULES")
print("=" * 80 + "\n")
deleted_count = 0
failed_deletes = []
for mod in safe_to_delete:
try:
mod_name = mod.name
mod_id = mod.id
mod.unlink()
print(f"✓ Deleted: {mod_name} (ID: {mod_id})")
deleted_count += 1
except Exception as e:
print(f"✗ Failed: {mod.name} - {e}")
failed_deletes.append({'name': mod.name, 'id': mod.id, 'reason': str(e)})
# Commit changes
print("\n" + "=" * 80)
print("COMMITTING CHANGES")
print("=" * 80 + "\n")
try:
env.cr.commit()
print("✓ All changes committed successfully!")
except Exception as e:
print(f"✗ Commit failed: {e}")
print("Changes were NOT saved!")
exit(1)
# Final result
print("\n" + "=" * 80)
print("RESULT")
print("=" * 80 + "\n")
print(f" • Successfully deleted modules: {deleted_count}")
print(f" • Failed deletions: {len(failed_deletes)}")
if failed_deletes:
print("\n⚠️ Modules not deleted:")
for item in failed_deletes:
print(f"{item['name']} (ID: {item['id']}): {item['reason']}")
if installed_obsolete:
print("\n" + "=" * 80)
print("⚠️ WARNING: OBSOLETE INSTALLED MODULES")
print("=" * 80 + "\n")
print("The following modules are marked 'installed' but no longer exist")
print("in the filesystem. They may cause problems.\n")
print("Options:")
print(" 1. Check if these modules were renamed/merged in the new version")
print(" 2. Manually uninstall them if possible")
print(" 3. Force delete them (risky, may break dependencies)\n")
for mod in sorted(installed_obsolete, key=lambda m: m.name):
# Find modules that depend on this module
dependents = env['ir.module.module'].search([
('state', '=', 'installed'),
('dependencies_id.name', '=', mod.name)
])
dep_info = f" <- Dependents: {dependents.mapped('name')}" if dependents else ""
print(f"{mod.name}{dep_info}")
print("\n" + "=" * 80)
print("CLEANUP COMPLETED!")
print("=" * 80 + "\n")

post_migration_fix_duplicated_views.py

@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Post-Migration Duplicate View Fixer
Run this AFTER migration to fix duplicate views automatically.
"""
print("\n" + "="*80)
print("POST-MIGRATION DUPLICATE VIEW FIXER")
print("="*80 + "\n")
from collections import defaultdict
# Find all duplicate views
all_views = env['ir.ui.view'].search(['|', ('active', '=', True), ('active', '=', False)])
keys = defaultdict(list)
for view in all_views:
if view.key:
keys[view.key].append(view)
duplicates = {k: v for k, v in keys.items() if len(v) > 1}
print(f"Found {len(duplicates)} keys with duplicate views\n")
if not duplicates:
print("✓ No duplicate views found! Database is clean.")
print("=" * 80 + "\n")
exit()
# Process duplicates
views_to_delete = []
redirect_log = []
for key, views in sorted(duplicates.items()):
print(f"\nProcessing key: {key}")
print("-" * 80)
# Sort views: module views first, then by ID (older first)
sorted_views = sorted(views, key=lambda v: (
0 if v.model_data_id else 1, # Module views first
v.id # Older views first (lower ID = older)
))
# Keep the first view (should be module view or oldest)
keep = sorted_views[0]
to_delete = sorted_views[1:]
module_keep = keep.model_data_id.module if keep.model_data_id else "Custom/DB"
print(f"KEEP: ID {keep.id:>6} | Module: {module_keep:<20} | {keep.name}")
for view in to_delete:
module = view.model_data_id.module if view.model_data_id else "Custom/DB"
print(f"DELETE: ID {view.id:>6} | Module: {module:<20} | {view.name}")
# Find and redirect children
children = env['ir.ui.view'].search([('inherit_id', '=', view.id)])
if children:
print(f" → Redirecting {len(children)} children {children.ids} to view {keep.id}")
for child in children:
child_module = child.model_data_id.module if child.model_data_id else "Custom/DB"
redirect_log.append({
'child_id': child.id,
'child_name': child.name,
'child_module': child_module,
'from': view.id,
'to': keep.id
})
try:
children.write({'inherit_id': keep.id})
print(f" ✓ Redirected successfully")
except Exception as e:
print(f" ✗ Redirect failed: {e}")
continue
views_to_delete.append(view)
# Summary before deletion
print("\n" + "="*80)
print("SUMMARY")
print("="*80 + "\n")
print(f"Views to delete: {len(views_to_delete)}")
print(f"Child views to redirect: {len(redirect_log)}\n")
if redirect_log:
print("Redirections that will be performed:")
for item in redirect_log[:10]: # Show first 10
print(f" • View {item['child_id']} ({item['child_module']})")
print(f" '{item['child_name']}'")
print(f" Parent: {item['from']}{item['to']}")
if len(redirect_log) > 10:
print(f" ... and {len(redirect_log) - 10} more redirections")
# Delete duplicate views
print("\n" + "="*80)
print("DELETING DUPLICATE VIEWS")
print("="*80 + "\n")
deleted_count = 0
failed_deletes = []
# Sort views by ID descending (delete newer/child views first)
views_to_delete_sorted = sorted(views_to_delete, key=lambda v: v.id, reverse=True)
for view in views_to_delete_sorted:
try:
# Create savepoint to isolate each deletion
env.cr.execute('SAVEPOINT delete_view')
view_id = view.id
view_name = view.name
view_key = view.key
# Double-check it has no children
remaining_children = env['ir.ui.view'].search([('inherit_id', '=', view_id)])
if remaining_children:
print(f"⚠️ Skipping view {view_id}: Still has {len(remaining_children)} children")
failed_deletes.append({
'id': view_id,
'reason': f'Still has {len(remaining_children)} children'
})
env.cr.execute('ROLLBACK TO SAVEPOINT delete_view')
continue
view.unlink()
env.cr.execute('RELEASE SAVEPOINT delete_view')
print(f"✓ Deleted view {view_id}: {view_key}")
deleted_count += 1
except Exception as e:
env.cr.execute('ROLLBACK TO SAVEPOINT delete_view')
print(f"✗ Failed to delete view {view.id}: {e}")
failed_deletes.append({
'id': view.id,
'name': view.name,
'reason': str(e)
})
# Commit changes
print("\n" + "="*80)
print("COMMITTING CHANGES")
print("="*80 + "\n")
try:
env.cr.commit()
print("✓ All changes committed successfully!")
except Exception as e:
print(f"✗ Commit failed: {e}")
print("Changes were NOT saved!")
exit(1)
# Final verification
print("\n" + "="*80)
print("FINAL VERIFICATION")
print("="*80 + "\n")
# Re-check for duplicates
all_views_after = env['ir.ui.view'].search([('active', '=', True)])
keys_after = defaultdict(list)
for view in all_views_after:
if view.key:
keys_after[view.key].append(view)
duplicates_after = {k: v for k, v in keys_after.items() if len(v) > 1}
print(f"Results:")
print(f" • Successfully deleted: {deleted_count} views")
print(f" • Failed deletions: {len(failed_deletes)}")
print(f" • Child views redirected: {len(redirect_log)}")
print(f" • Remaining duplicates: {len(duplicates_after)}")
if failed_deletes:
print(f"\n⚠️ Failed deletions:")
for item in failed_deletes:
print(f" • View {item['id']}: {item['reason']}")
if duplicates_after:
print(f"\n⚠️ Still have {len(duplicates_after)} duplicate keys:")
for key, views in sorted(duplicates_after.items())[:5]:
print(f"{key}: {len(views)} views")
for view in views:
module = view.model_data_id.module if view.model_data_id else "Custom/DB"
print(f" - ID {view.id} ({module})")
print(f"\n Run this script again to attempt another cleanup.")
else:
print(f"\n✓ All duplicates resolved!")
print("\n" + "="*80)
print("FIX COMPLETED!")
print("="*80)

pre_migration_view_checking.py

@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""
Pre-Migration Cleanup Script for Odoo
Run this BEFORE migrating to identify and clean up custom views.
Usage: odoo shell -d dbname < pre_migration_cleanup.py
"""
print("\n" + "="*80)
print("PRE-MIGRATION CLEANUP - VIEW ANALYSIS")
print("="*80 + "\n")
# 1. Find all custom (COW) views
print("STEP 1: Identifying Custom/COW Views")
print("-"*80)
all_views = env['ir.ui.view'].search(['|', ('active', '=', True), ('active', '=', False)])
cow_views = all_views.filtered(lambda v: not v.model_data_id)
print(f"Total views in database: {len(all_views)}")
print(f"Custom views (no module): {len(cow_views)}")
print(f"Module views: {len(all_views) - len(cow_views)}\n")
if cow_views:
print("Custom views found:\n")
print(f"{'ID':<8} {'Active':<8} {'Key':<50} {'Name':<40}")
print("-"*120)
for view in cow_views[:50]: # Show first 50
active_str = "" if view.active else ""
key_str = view.key[:48] if view.key else "N/A"
name_str = view.name[:38] if view.name else "N/A"
print(f"{view.id:<8} {active_str:<8} {key_str:<50} {name_str:<40}")
if len(cow_views) > 50:
print(f"\n... and {len(cow_views) - 50} more custom views")
# 2. Find duplicate views
print("\n" + "="*80)
print("STEP 2: Finding Duplicate Views (Same Key)")
print("-"*80 + "\n")
from collections import defaultdict
keys = defaultdict(list)
for view in all_views.filtered(lambda v: v.key and v.active):
keys[view.key].append(view)
duplicates = {k: v for k, v in keys.items() if len(v) > 1}
print(f"Found {len(duplicates)} keys with duplicate views:\n")
if duplicates:
for key, views in sorted(duplicates.items()):
print(f"\nKey: {key} ({len(views)} duplicates)")
for view in views:
module = view.model_data_id.module if view.model_data_id else "⚠️ Custom/DB"
print(f" ID {view.id:>6}: {module:<25} | {view.name}")
# 3. Find views that might have xpath issues
print("\n" + "="*80)
print("STEP 3: Finding Views with XPath Expressions")
print("-"*80 + "\n")
import re
views_with_xpath = []
xpath_pattern = r'<xpath[^>]+expr="([^"]+)"'
for view in all_views.filtered(lambda v: v.active and v.inherit_id):
xpaths = re.findall(xpath_pattern, view.arch_db)
if xpaths:
views_with_xpath.append({
'view': view,
'xpaths': xpaths,
'is_custom': not bool(view.model_data_id)
})
print(f"Found {len(views_with_xpath)} views with xpath expressions")
custom_xpath_views = [v for v in views_with_xpath if v['is_custom']]
print(f" - {len(custom_xpath_views)} are custom views (potential issue!)")
print(f" - {len(views_with_xpath) - len(custom_xpath_views)} are module views\n")
if custom_xpath_views:
print("Custom views with xpaths (risk for migration issues):\n")
for item in custom_xpath_views:
view = item['view']
print(f"ID {view.id}: {view.name}")
print(f" Key: {view.key}")
print(f" Inherits from: {view.inherit_id.key}")
print(f" XPath count: {len(item['xpaths'])}")
print(f" Sample xpaths: {item['xpaths'][:2]}")
print()
# 4. Summary and recommendations
print("=" * 80)
print("SUMMARY AND RECOMMENDATIONS")
print("=" * 80 + "\n")
print(f"📊 Statistics:")
print(f" • Total views: {len(all_views)}")
print(f" • Custom views: {len(cow_views)}")
print(f" • Duplicate view keys: {len(duplicates)}")
print(f" • Custom views with xpaths: {len(custom_xpath_views)}\n")
print(f"\n📋 RECOMMENDED ACTIONS BEFORE MIGRATION:\n")
if custom_xpath_views:
print(f"1. Archive or delete {len(custom_xpath_views)} custom views with xpaths:")
print(f" • Review each one and determine if still needed")
print(f" • Archive unnecessary ones: env['ir.ui.view'].browse([ids]).write({{'active': False}})")
print(f" • Plan to recreate important ones as proper module views after migration\n")
if duplicates:
print(f"2. Fix {len(duplicates)} duplicate view keys:")
print(f" • Manually review and delete obsolete duplicates, keeping the most appropriate one")
print(f" • Document the remaining appropriate ones as script post_migration_fix_duplicated_views.py will run AFTER the migration and delete all duplicates.\n")
if cow_views:
print(f"3. Review {len(cow_views)} custom views:")
print(f" • Document which ones are important")
print(f" • Export their XML for reference")
print(f" • Consider converting to module views\n")
print("=" * 80 + "\n")

prepare_db.sh

@@ -1,21 +1,24 @@
#!/bin/bash
set -euo pipefail
# Global variables
ODOO_SERVICE="$1"
DB_NAME="$2"
DB_FINALE_MODEL="$3"
DB_FINALE_SERVICE="$4"
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT
echo "Start database preparation"
# Check POSTGRES container is running
if ! docker ps | grep -q "$DB_CONTAINER_NAME"; then
printf "Docker container %s is not running.\n" "$DB_CONTAINER_NAME" >&2
return 1
if ! docker ps | grep -q "$POSTGRES_SERVICE_NAME"; then
printf "Docker container %s is not running.\n" "$POSTGRES_SERVICE_NAME" >&2
exit 1
fi
EXT_EXISTS=$(query_postgres_container "SELECT 1 FROM pg_extension WHERE extname = 'dblink'" "$DB_NAME") || exit 1
if [ "$EXT_EXISTS" != "1" ]; then
if [[ "$EXT_EXISTS" != "1" ]]; then
query_postgres_container "CREATE EXTENSION dblink;" "$DB_NAME" || exit 1
fi
@@ -39,37 +42,34 @@ echo "Base neutralized..."
## List add-ons not in final version ##
#######################################
# Retrieve add-ons not available on the final Odoo version
SQL_404_ADDONS_LIST="
SELECT module_origin.name
FROM ir_module_module module_origin
LEFT JOIN (
SELECT *
FROM dblink('dbname=$FINALE_DB_NAME','SELECT name, shortdesc, author FROM ir_module_module')
AS tb2(name text, shortdesc text, author text)
) AS module_dest ON module_dest.name = module_origin.name
WHERE (module_dest.name IS NULL) AND (module_origin.state = 'installed') AND (module_origin.author NOT IN ('Odoo S.A.', 'Lokavaluto', 'Elabore'))
ORDER BY module_origin.name
;
"
SQL_404_ADDONS_LIST=$(cat <<EOF
SELECT module_origin.name
FROM ir_module_module module_origin
LEFT JOIN (
SELECT *
FROM dblink('dbname=${FINALE_DB_NAME}','SELECT name, shortdesc, author FROM ir_module_module')
AS tb2(name text, shortdesc text, author text)
) AS module_dest ON module_dest.name = module_origin.name
WHERE (module_dest.name IS NULL)
AND (module_origin.state = 'installed')
AND (module_origin.author NOT IN ('Odoo S.A.', 'Lokavaluto', 'Elabore'))
ORDER BY module_origin.name;
EOF
)
echo "Retrieve 404 addons... "
echo "SQL REQUEST = $SQL_404_ADDONS_LIST"
query_postgres_container "$SQL_404_ADDONS_LIST" "$DB_NAME" > 404_addons || exit 1
query_postgres_container "$SQL_404_ADDONS_LIST" "$DB_NAME" > "${TMPDIR}/404_addons"
# Keep only the installed add-ons
INSTALLED_ADDONS="SELECT name FROM ir_module_module WHERE state='installed';"
query_postgres_container "$INSTALLED_ADDONS" "$DB_NAME" > installed_addons || exit 1
query_postgres_container "$INSTALLED_ADDONS" "$DB_NAME" > "${TMPDIR}/installed_addons"
grep -Fx -f 404_addons installed_addons > final_404_addons
rm -f 404_addons installed_addons
grep -Fx -f "${TMPDIR}/404_addons" "${TMPDIR}/installed_addons" > "${TMPDIR}/final_404_addons" || true
# Ask confirmation to uninstall the selected add-ons
echo "
==== ADD-ONS CHECK ====
Installed add-ons not available in final Odoo version:
"
cat final_404_addons
cat "${TMPDIR}/final_404_addons"
echo "
@@ -77,6 +77,26 @@ Do you accept to migrate the database with all these add-ons still installed? (Y
echo "Y - Yes, let's go on with the upgrade."
echo "N - No, stop the upgrade"
read -n 1 -p "Your choice: " choice
case "$choice" in
[Yy] ) echo "
Let's go on!";;
[Nn] ) echo "
Upgrade cancelled!"; exit 1;;
* ) echo "
Please answer with Y or N.";;
esac
# Check the views
PYTHON_SCRIPT=pre_migration_view_checking.py
echo "Check views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
echo "
Do you accept to migrate the database with the current views states? (Y/N/R)"
echo "Y - Yes, let's go on with the upgrade."
echo "N - No, stop the upgrade"
read -n 1 -p "Your choice: " choice
case "$choice" in
[Yy] ) echo "
Upgrade confirmed!";;

upgrade.sh

@@ -1,207 +1,148 @@
#!/bin/bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
####################
# GLOBAL VARIABLES #
# USAGE & ARGUMENTS
####################
ORIGIN_VERSION="$1" # "12" for version 12.0
FINAL_VERSION="$2" # "16" for version 16.0
# Path to the database to migrate. Must be a .zip file with the following syntax: {DATABASE_NAME}.zip
ORIGIN_DB_NAME="$3"
ORIGIN_SERVICE_NAME="$4"
usage() {
cat <<EOF >&2
Usage: $0 <origin_version> <final_version> <db_name> <service_name>
# Get origin database name
COPY_DB_NAME="ou${ORIGIN_VERSION}"
# Define finale database name
Arguments:
origin_version Origin Odoo version number (e.g., 12 for version 12.0)
final_version Target Odoo version number (e.g., 16 for version 16.0)
db_name Name of the database to migrate
service_name Name of the origin Odoo service (docker compose service)
Example:
$0 14 16 elabore_20241208 odoo14
EOF
exit 1
}
if [[ $# -lt 4 ]]; then
log_error "Missing arguments. Expected 4, got $#."
usage
fi
check_required_commands
readonly ORIGIN_VERSION="$1"
readonly FINAL_VERSION="$2"
readonly ORIGIN_DB_NAME="$3"
readonly ORIGIN_SERVICE_NAME="$4"
readonly COPY_DB_NAME="ou${ORIGIN_VERSION}"
export FINALE_DB_NAME="ou${FINAL_VERSION}"
# Define finale odoo service name
FINALE_SERVICE_NAME="${FINALE_DB_NAME}"
readonly FINALE_DB_NAME
readonly FINALE_SERVICE_NAME="${FINALE_DB_NAME}"
# Service postgres name
export POSTGRES_SERVICE_NAME="lokavaluto_postgres_1"
postgres_containers=$(docker ps --format '{{.Names}}' | grep postgres || true)
postgres_count=$(echo "$postgres_containers" | grep -c . || echo 0)
#############################################
# DISPLAYS ALL INPUTS PARAMETERS
#############################################
echo "===== INPUT PARAMETERS ====="
echo "Origin version .......... $ORIGIN_VERSION"
echo "Final version ........... $FINAL_VERSION"
echo "Origin DB name ........... $ORIGIN_DB_NAME"
echo "Origin service name ..... $ORIGIN_SERVICE_NAME"
echo "
===== COMPUTED GLOBALE VARIABLES ====="
echo "Copy DB name ............. $COPY_DB_NAME"
echo "Finale DB name ........... $FINALE_DB_NAME"
echo "Finale service name ...... $FINALE_SERVICE_NAME"
echo "Postgres service name .... $POSTGRES_SERVICE_NAME"
# Function to launch an SQL request to the postgres container
query_postgres_container(){
local QUERY="$1"
local DB_NAME="$2"
if [ -z "$QUERY" ]; then
return 0
fi
local result
if ! result=$(docker exec -u 70 "$POSTGRES_SERVICE_NAME" psql -d "$DB_NAME" -t -A -c "$QUERY"); then
printf "Failed to execute SQL query: %s\n" "$query" >&2
printf "Error: %s\n" "$result" >&2
exit 1
fi
echo "$result"
}
export -f query_postgres_container
# Function to copy the postgres databases
copy_database(){
local FROM_DB="$1"
local TO_SERVICE="$2"
local TO_DB="$3"
docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm cp -f "$FROM_DB" "$TO_DB"@"$TO_SERVICE"
}
export -f copy_database
# Function to copy the filetores
copy_filestore(){
local FROM_SERVICE="$1"
local FROM_DB="$2"
local TO_SERVICE="$3"
local TO_DB="$4"
mkdir -p /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
rm -rf /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
cp -a /srv/datastore/data/"$FROM_SERVICE"/var/lib/odoo/filestore/"$FROM_DB" /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
echo "Filestore $FROM_SERVICE/$FROM_DB copied."
}
export -f copy_filestore
##############################################
# CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE #
##############################################
echo "
==== CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE ===="
# Check POSTGRES container is running
if ! docker ps | grep -q "$POSTGRES_SERVICE_NAME"; then
printf "Docker container %s is not running.\n" "$POSTGRES_SERVICE_NAME" >&2
return 1
else
echo "UPGRADE: container $POSTGRES_SERVICE_NAME running."
fi
# Check origin database is in the local postgres
DB_EXISTS=$(docker exec -it -u 70 $POSTGRES_SERVICE_NAME psql -tc "SELECT 1 FROM pg_database WHERE datname = '$ORIGIN_DB_NAME'" | tr -d '[:space:]')
if [ "$DB_EXISTS" ]; then
echo "UPGRADE: Database '$ORIGIN_DB_NAME' found."
else
echo "ERROR: Database '$ORIGIN_DB_NAME' not found in the local postgress service. Please add it and restart the upgrade process."
if [[ "$postgres_count" -eq 0 ]]; then
log_error "No running PostgreSQL container found. Please start a PostgreSQL container and try again."
exit 1
elif [[ "$postgres_count" -gt 1 ]]; then
log_error "Multiple PostgreSQL containers found:"
echo "$postgres_containers" >&2
log_error "Please ensure only one PostgreSQL container is running."
exit 1
fi
# Check that the origin filestore exist
REPERTOIRE="/srv/datastore/data/${ORIGIN_SERVICE_NAME}/var/lib/odoo/filestore/${ORIGIN_DB_NAME}"
if [ -d $REPERTOIRE ]; then
echo "UPGRADE: '$REPERTOIRE' filestore found."
export POSTGRES_SERVICE_NAME="$postgres_containers"
readonly POSTGRES_SERVICE_NAME
log_step "INPUT PARAMETERS"
log_info "Origin version .......... $ORIGIN_VERSION"
log_info "Final version ........... $FINAL_VERSION"
log_info "Origin DB name ........... $ORIGIN_DB_NAME"
log_info "Origin service name ..... $ORIGIN_SERVICE_NAME"
log_step "COMPUTED GLOBAL VARIABLES"
log_info "Copy DB name ............. $COPY_DB_NAME"
log_info "Finale DB name ........... $FINALE_DB_NAME"
log_info "Finale service name ...... $FINALE_SERVICE_NAME"
log_info "Postgres service name .... $POSTGRES_SERVICE_NAME"
log_step "CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE"
db_exists=$(docker exec -it -u 70 "$POSTGRES_SERVICE_NAME" psql -tc "SELECT 1 FROM pg_database WHERE datname = '$ORIGIN_DB_NAME'" | tr -d '[:space:]')
if [[ "$db_exists" ]]; then
log_info "Database '$ORIGIN_DB_NAME' found."
else
echo "ERROR: '$REPERTOIRE' filestore not found, please add it and restart the upgrade process."
log_error "Database '$ORIGIN_DB_NAME' not found in the local postgres service. Please add it and restart the upgrade process."
exit 1
fi
#######################################
# LAUNCH VIRGIN ODOO IN FINAL VERSION #
#######################################
filestore_path="${DATASTORE_PATH}/${ORIGIN_SERVICE_NAME}/${FILESTORE_SUBPATH}/${ORIGIN_DB_NAME}"
if [[ -d "$filestore_path" ]]; then
log_info "Filestore '$filestore_path' found."
else
log_error "Filestore '$filestore_path' not found, please add it and restart the upgrade process."
exit 1
fi
log_step "LAUNCH VIRGIN ODOO IN FINAL VERSION"
# Remove finale database and datastore if already exists (we need a virgin Odoo)
if docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm ls | grep -q "$FINALE_SERVICE_NAME"; then
log_info "Removing existing finale database and filestore..."
docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm rm -f "$FINALE_SERVICE_NAME"
rm -rf /srv/datastore/data/"$FINALE_SERVICE_NAME"/var/lib/odoo/filestore/"$FINALE_SERVICE_NAME"
sudo rm -rf "${DATASTORE_PATH}/${FINALE_SERVICE_NAME}/${FILESTORE_SUBPATH}/${FINALE_SERVICE_NAME}"
fi
compose --debug run "$FINALE_SERVICE_NAME" -i base --stop-after-init --no-http
echo "Model database in final Odoo version created."
log_info "Model database in final Odoo version created."
############################
# COPY ORIGINAL COMPONENTS #
############################
log_step "COPY ORIGINAL COMPONENTS"
echo "
==== COPY ORIGINAL COMPONENTS ===="
echo "UPGRADE: Start copy"
copy_database "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME"
log_info "Original database copied to ${COPY_DB_NAME}@${COPY_DB_NAME}."
# Copy database
copy_database "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME" || exit 1
echo "UPGRADE: original database copied in ${COPY_DB_NAME}@${COPY_DB_NAME}."
# Copy filestore
copy_filestore "$ORIGIN_SERVICE_NAME" "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME" || exit 1
echo "UPGRADE: original filestore copied."
copy_filestore "$ORIGIN_SERVICE_NAME" "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME"
log_info "Original filestore copied."
#####################
# PATH OF MIGRATION #
####################
log_step "PATH OF MIGRATION"
echo "
==== PATH OF MIGRATION ===="
# List all the versions to migrate through
declare -a versions
nb_migrations=$(($FINAL_VERSION - $ORIGIN_VERSION))
nb_migrations=$((FINAL_VERSION - ORIGIN_VERSION))
# Build the migration path
for ((i = 0; i<$nb_migrations; i++))
do
versions[$i]=$(($ORIGIN_VERSION + 1 + i))
for ((i = 0; i < nb_migrations; i++)); do
versions[i]=$((ORIGIN_VERSION + 1 + i))
done
echo "UPGRADE: Migration path is ${versions[@]}"
log_info "Migration path is ${versions[*]}"
########################
# DATABASE PREPARATION #
########################
log_step "DATABASE PREPARATION"
echo "
==== DATABASE PREPARATION ===="
./prepare_db.sh "$COPY_DB_NAME" "$COPY_DB_NAME" "$FINALE_DB_MODEL_NAME" "$FINALE_SERVICE_NAME" || exit 1
./prepare_db.sh "$COPY_DB_NAME" "$COPY_DB_NAME" "$FINALE_DB_MODEL_NAME" "$FINALE_SERVICE_NAME"
###################
# UPGRADE PROCESS #
###################
log_step "UPGRADE PROCESS"
for version in "${versions[@]}"
do
echo "START UPGRADE TO ${version}.0"
start_version=$((version-1))
end_version="$version"
for version in "${versions[@]}"; do
log_info "START UPGRADE TO ${version}.0"
### Go to the directory holding the upgrade scripts
cd "${end_version}.0"
cd "${version}.0"
### Execute pre_upgrade scripts
./pre_upgrade.sh || exit 1
./pre_upgrade.sh
./upgrade.sh
./post_upgrade.sh
### Start upgrade
./upgrade.sh || exit 1
### Execute post-upgrade scripts
./post_upgrade.sh || exit 1
### Return to the parent directory for the following steps
cd ..
echo "END UPGRADE TO ${version}.0"
log_info "END UPGRADE TO ${version}.0"
done
## END UPGRADE LOOP
##########################
# POST-UPGRADE PROCESSES #
##########################
./finalize_db.sh "$FINALE_DB_NAME" "$FINALE_SERVICE_NAME" || exit 1
log_step "POST-UPGRADE PROCESSES"
./finalize_db.sh "$FINALE_DB_NAME" "$FINALE_SERVICE_NAME"
echo "UPGRADE PROCESS ENDED WITH SUCCESS"
log_step "UPGRADE PROCESS ENDED WITH SUCCESS"