Compare commits

24 Commits

Author SHA1 Message Date
Stéphan Sainléger
ec0a0cbd46 [IMP] add migrations script to use bank-payment-alternative addons in 18.0 2026-02-06 11:31:18 +01:00
Stéphan Sainléger
82b4713f02 [NEW] add post-migration views validation process 2026-02-04 11:38:55 +01:00
Stéphan Sainléger
54057611eb [IMP] include Elabore and Lokavaluto add-ons in missing add-ons detection process 2026-02-04 00:04:39 +01:00
Stéphan Sainléger
b239176afe [FIX] correct final database detection 2026-02-04 00:02:11 +01:00
Stéphan Sainléger
ee27536011 [FIX] use relative path for compose to avoid 0k dev-pack IOError
The 0k dev-pack's compose script doesn't handle absolute paths correctly.
It passes HOST_COMPOSE_YML_FILE to the container, which tries to open
it directly instead of using the mounted path.

Add run_compose() wrapper that changes to PROJECT_ROOT before calling
compose with a relative path, ensuring consistent behavior regardless
of the current working directory.
2026-02-03 17:15:20 +01:00
Stéphan Sainléger
ebc1adb4fa [IMP] rewrite README with comprehensive documentation
Add complete documentation in French including:
- Table of contents for easy navigation
- Prerequisites section (0k dev-pack, Docker, rsync, sudo)
- Project structure explanation with directory tree
- Detailed workflow explanation with step-by-step breakdown
- ASCII diagram showing migration flow
- Usage examples with command-line syntax
- Customization guide for version-specific scripts
- Troubleshooting section with common issues and solutions

Replace the previous minimal README that only contained basic
installation and configuration notes.
2026-02-02 23:48:28 +01:00
Stéphan Sainléger
8d2b151a85 [IMP] update all script paths for new directory structure
Update all path references to match the new directory layout:

upgrade.sh:
  - ./prepare_db.sh -> ${SCRIPT_DIR}/scripts/prepare_db.sh
  - ./finalize_db.sh -> ${SCRIPT_DIR}/scripts/finalize_db.sh
  - ${SCRIPT_DIR}/${version}.0/ -> ${SCRIPT_DIR}/versions/${version}.0/

scripts/prepare_db.sh:
  - pre_migration_view_checking.py -> ${SCRIPT_DIR}/lib/python/check_views.py

scripts/finalize_db.sh:
  - post_migration_fix_duplicated_views.py -> ${SCRIPT_DIR}/lib/python/fix_duplicated_views.py
  - post_migration_cleanup_obsolete_modules.py -> ${SCRIPT_DIR}/lib/python/cleanup_modules.py

versions/*/upgrade.sh:
  - ../compose.yml -> ../../config/compose.yml
2026-02-02 22:11:15 +01:00
Stéphan Sainléger
245ddcc3f9 [IMP] reorganize project directory structure
Restructure the project for better organization and maintainability:

New structure:
  ./upgrade.sh              - Main entry point (unchanged)
  ./lib/common.sh           - Shared bash functions
  ./lib/python/             - Python utility scripts
  ./scripts/                - Workflow scripts (prepare_db, finalize_db)
  ./config/                 - Configuration files (compose.yml)
  ./versions/{13..18}.0/    - Version-specific migration scripts

File renames:
  - pre_migration_view_checking.py -> lib/python/check_views.py
  - post_migration_fix_duplicated_views.py -> lib/python/fix_duplicated_views.py
  - post_migration_cleanup_obsolete_modules.py -> lib/python/cleanup_modules.py

Benefits:
  - Single entry point visible at root level
  - Clear separation between shared code, scripts, and config
  - Shorter, cleaner Python script names (context given by caller)
  - Easier navigation and maintenance
2026-02-02 22:10:01 +01:00
Stéphan Sainléger
eb95a8152a [IMP] avoid directory changes in migration loop
Replace cd into version directories with absolute path execution:

Before:
  cd "${version}.0"
  ./pre_upgrade.sh
  ./upgrade.sh
  ./post_upgrade.sh
  cd ..

After:
  "${SCRIPT_DIR}/${version}.0/pre_upgrade.sh"
  "${SCRIPT_DIR}/${version}.0/upgrade.sh"
  "${SCRIPT_DIR}/${version}.0/post_upgrade.sh"

Benefits:
- No working directory state to track
- More robust: script works regardless of where it's called from
- Easier debugging: no need to remember current directory
- Avoids potential issues if a subscript changes directory
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
febe877043 [FIX] correct undefined variable FINALE_DB_MODEL_NAME
Replace $FINALE_DB_MODEL_NAME with $FINALE_DB_NAME in the call to
prepare_db.sh.

FINALE_DB_MODEL_NAME was never defined anywhere in the codebase,
causing the script to fail immediately with 'set -u' (unbound variable
error). The intended variable is FINALE_DB_NAME which contains the
target database name (e.g., 'ou16').
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
f07a654c22 [IMP] factor out user confirmation prompts into reusable function
Add confirm_or_exit() function to lib/common.sh to eliminate duplicated
confirmation dialog code in prepare_db.sh.

Before: Two 10-line case statements with identical logic
After: Two single-line function calls

The function provides consistent behavior:
- Displays the question with Y/N options
- Returns 0 on Y/y (continue execution)
- Exits with error on any other input

This follows DRY principle and ensures all confirmation prompts
behave identically across the codebase.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
60d25124c4 [IMP] use rsync instead of cp for filestore copy
Replace mkdir + rm -rf + cp -a sequence with rsync --delete:

Before (3 commands):
  sudo mkdir -p "$dst_path"
  sudo rm -rf "$dst_path"
  sudo cp -a "$src_path" "$dst_path"

After (2 commands):
  sudo mkdir -p "$(dirname "$dst_path")"
  sudo rsync -a --delete "${src_path}/" "${dst_path}/"

Benefits:
- Incremental copy: only transfers changed files on re-run
- Atomic delete + copy: --delete removes extra files in destination
- Preserves all attributes like cp -a
- Faster for large filestores when re-running migration

Added rsync to required commands check.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
67c2d5a061 [IMP] combine SQL queries into single transaction with documentation
Merge three separate SQL queries into one for better performance:
- 1 database connection instead of 3
- Atomic execution of all cleanup operations

Added detailed SQL comments explaining each operation:
- DROP SEQUENCE: Why stale sequences prevent Odoo startup
- UPDATE ir_ui_view: Why website templates are reset except pages
- DELETE ir_attachment: Why compiled assets must be purged

Also changed DROP SEQUENCE to DROP SEQUENCE IF EXISTS to avoid
errors if sequences don't exist.
2026-02-02 22:04:49 +01:00
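A minimal sketch of the combined-query pattern this commit describes, with the statements adapted from the finalize script elsewhere in this changeset (exact WHERE clauses illustrative, not authoritative):

```shell
#!/usr/bin/env bash
set -euo pipefail
# One heredoc = one psql call = one connection, executed atomically,
# instead of three separate round-trips.
FINALE_SQL=$(cat <<'EOF'
-- Stale signaling sequences prevent Odoo from starting after a restore
DROP SEQUENCE IF EXISTS base_registry_signaling;
DROP SEQUENCE IF EXISTS base_cache_signaling;
-- Reset modified website templates, except actual website pages
UPDATE ir_ui_view
SET arch_db = NULL
WHERE arch_fs IS NOT NULL
  AND arch_fs LIKE 'website/%'
  AND arch_db IS NOT NULL
  AND id NOT IN (SELECT view_id FROM website_page);
-- Purge compiled assets so they are rebuilt for the new version
DELETE FROM ir_attachment
WHERE name LIKE '/web/assets/%'
   OR name LIKE '%.assets_%';
EOF
)
echo "$FINALE_SQL"
```

Since `psql -c` wraps a multi-statement string in a single implicit transaction, either all cleanup operations apply or none do.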
Stéphan Sainléger
e17db5d062 [IMP] simplify migration path construction with seq
Replace manual loop building version array with seq + readarray:

Before (4 lines):
  declare -a versions
  nb_migrations=$((FINAL_VERSION - ORIGIN_VERSION))
  for ((i = 0; i < nb_migrations; i++)); do
      versions[i]=$((ORIGIN_VERSION + 1 + i))
  done

After (1 line):
  readarray -t versions < <(seq $((ORIGIN_VERSION + 1)) "$FINAL_VERSION")

The seq command is purpose-built for generating number sequences,
making the intent clearer and the code more concise.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
89cc3be05e [IMP] simplify PostgreSQL container detection with readarray
Replace double grep pattern with readarray for cleaner container detection:
- Single grep call instead of two
- Native bash array instead of string manipulation
- Array length check instead of grep -c
- Proper formatting when listing multiple containers

The readarray approach is more idiomatic and avoids edge cases with
empty strings and newline handling.
2026-02-02 22:04:49 +01:00
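A sketch of the readarray-based detection this commit describes; the function and container names below are illustrative stand-ins (in the real script the list would come from `docker ps`):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Simulated `docker ps` output for illustration only.
list_running_containers() {
    printf '%s\n' myproject_postgres_1 myproject_odoo14_1
}
# Single grep call; '|| true' keeps set -e happy when nothing matches,
# and readarray gives a native array (no string splitting, no grep -c).
readarray -t postgres_containers < <(list_running_containers | grep postgres || true)
if [[ ${#postgres_containers[@]} -ne 1 ]]; then
    printf 'Expected exactly one postgres container, found %d:\n' "${#postgres_containers[@]}" >&2
    printf '  %s\n' "${postgres_containers[@]}" >&2
    exit 1
fi
echo "Using container: ${postgres_containers[0]}"
```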
Stéphan Sainléger
22d5b6af7e [IMP] remove redundant SQL query and grep for missing addons
The SQL query already filters on module_origin.state = 'installed',
so the second query to get installed addons and the grep intersection
were completely redundant.

Before: 2 SQL queries + grep + 3 temp files
After: 1 SQL query + variable

This simplifies the code and reduces database round-trips.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
00c12769bc [IMP] add external command verification at startup
Add check_required_commands() function to verify that all required
external tools are available before the script begins execution:
- docker: Container runtime
- compose: Docker compose wrapper (0k-scripts)
- sudo: Required for filestore operations

Benefits:
- Fails fast with a clear error message listing missing commands
- Prevents cryptic 'command not found' errors mid-execution
- Documents script dependencies explicitly
- Called immediately after argument validation in upgrade.sh
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
4bdedf3759 [IMP] apply naming conventions for variables
Apply consistent naming conventions throughout upgrade.sh:
- UPPERCASE + readonly for script-level constants (immutable values)
- lowercase for temporary/local variables within the script flow

Constants marked readonly:
- ORIGIN_VERSION, FINAL_VERSION, ORIGIN_DB_NAME, ORIGIN_SERVICE_NAME
- COPY_DB_NAME, FINALE_DB_NAME, FINALE_SERVICE_NAME
- POSTGRES_SERVICE_NAME

Local variables renamed to lowercase:
- postgres_containers, postgres_count (detection phase)
- db_exists, filestore_path (validation phase)

This convention makes it immediately clear which variables are
configuration constants vs runtime values, and prevents accidental
modification of critical values.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
1027428bfd [IMP] use mktemp and trap for temporary file cleanup
Replace hardcoded temporary file paths with mktemp -d for secure
temporary directory creation, and add a trap to automatically clean
up on script exit (success, failure, or interruption).

Benefits:
- Automatic cleanup even on Ctrl+C or script errors
- No leftover temporary files in the working directory
- Secure temporary directory creation (proper permissions)
- Files isolated in dedicated temp directory

Added '|| true' to grep command since it returns exit code 1 when
no matches are found, which would trigger set -e otherwise.
2026-02-02 22:04:49 +01:00
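A minimal sketch of the mktemp + trap pattern this commit describes (file contents and names are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Secure temp directory; the trap cleans up on normal exit, error, or Ctrl+C.
tmp_dir=$(mktemp -d)
trap 'rm -rf "$tmp_dir"' EXIT
echo "mail, sale, stock" > "$tmp_dir/installed_addons"
# grep exits 1 when nothing matches, which would trip set -e
# in a plain statement; hence the '|| true'.
missing=$(grep "nonexistent_addon" "$tmp_dir/installed_addons" || true)
echo "missing addons: '${missing}'"
```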
Stéphan Sainléger
01e23cc92c [IMP] use heredoc with variable expansion for SQL query
Convert the SQL_404_ADDONS_LIST query from a quoted string to a heredoc
without quotes (<<EOF instead of <<'EOF') to make variable expansion
explicit and consistent with other SQL blocks in the codebase.

Key difference between heredoc variants:
- <<'EOF': Literal content, no variable expansion (use for static SQL)
- <<EOF: Variables like ${FINALE_DB_NAME} are expanded (use when needed)

Also improved SQL formatting for better readability.
2026-02-02 22:04:49 +01:00
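The two heredoc variants behave as follows (database name illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
FINALE_DB_NAME="ou18"
# <<'EOF' (quoted delimiter): content is literal, nothing is expanded.
static_sql=$(cat <<'EOF'
-- ${FINALE_DB_NAME} stays literal here
SELECT name FROM ir_module_module WHERE state = 'installed';
EOF
)
# <<EOF (unquoted delimiter): ${FINALE_DB_NAME} is expanded by the shell.
dynamic_sql=$(cat <<EOF
SELECT '${FINALE_DB_NAME}'::text AS target_db;
EOF
)
echo "$dynamic_sql"
```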
Stéphan Sainléger
d3f0998036 [IMP] add structured logging functions
Add logging functions to lib/common.sh for consistent output formatting:
- log_info(): Standard informational messages with [INFO] prefix
- log_warn(): Warning messages to stderr with [WARN] prefix
- log_error(): Error messages to stderr with [ERROR] prefix
- log_step(): Section headers with visual separators

Update upgrade.sh to use these functions throughout, replacing ad-hoc
echo statements. This provides:
- Consistent visual formatting across all scripts
- Clear distinction between info, warnings and errors
- Errors properly sent to stderr
- Easier log parsing and filtering

Also removed redundant '|| exit 1' statements since set -e handles
command failures automatically.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
914ae34f12 [IMP] centralize common functions in lib/common.sh
Extract shared utility functions into a dedicated library file:
- query_postgres_container: Execute SQL queries in postgres container
- copy_database: Copy database using pgm
- copy_filestore: Copy Odoo filestore directory
- exec_python_script_in_odoo_shell: Run Python scripts in Odoo shell

Benefits:
- Single source of truth for utility functions
- Easier maintenance and testing
- Consistent behavior across all scripts
- Reduced code duplication

Also introduces readonly constants DATASTORE_PATH and FILESTORE_SUBPATH
to avoid hardcoded paths scattered throughout the codebase.
2026-02-02 22:04:49 +01:00
Stéphan Sainléger
176fa0957c [FIX] correct undefined variable DB_CONTAINER_NAME
Replace $DB_CONTAINER_NAME with $POSTGRES_SERVICE_NAME which is the
correct variable exported from the parent script (upgrade.sh).

DB_CONTAINER_NAME was never defined, causing the script to fail
immediately with 'set -u' enabled (unbound variable error). The
intended variable is POSTGRES_SERVICE_NAME which contains the name
of the PostgreSQL container detected at runtime.
2026-02-02 22:04:41 +01:00
Stéphan Sainléger
8061d52d25 [FIX] correct return statement outside function
Replace 'return 1' with 'exit 1' in prepare_db.sh.

The 'return' statement is only valid inside functions. When used at
the script's top level, it behaves unpredictably - in some shells it
exits the script, in others it's an error. Using 'exit 1' explicitly
terminates the script with an error status, which is the intended
behavior when the PostgreSQL container is not running.
2026-02-02 22:04:20 +01:00
39 changed files with 1602 additions and 428 deletions


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8013:8069 ou13 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou13


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8014:8069 ou14 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou14 --load=base,web,openupgrade_framework


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8015:8069 ou15 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou15 --load=base,web,openupgrade_framework


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8016:8069 ou16 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou16 --load=base,web,openupgrade_framework


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8017:8069 ou17 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou17 --load=base,web,openupgrade_framework


@@ -1,6 +0,0 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 18.0..."
#compose --debug run ou18 -u base --stop-after-init --no-http


@@ -1,20 +0,0 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 18.0..."
# Copy database
copy_database ou17 ou18 ou18 || exit 1
# Execute SQL pre-migration commands
PRE_MIGRATE_SQL=$(cat <<'EOF'
UPDATE account_analytic_plan SET default_applicability=NULL WHERE default_applicability='optional';
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL"
query_postgres_container "$PRE_MIGRATE_SQL" ou18 || exit 1
# Copy filestores
copy_filestore ou17 ou17 ou18 ou18 || exit 1
echo "Ready for migration to 18.0!"


@@ -1,4 +0,0 @@
#!/bin/bash
set -euo pipefail
compose -f ../compose.yml run -p 8018:8069 ou18 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou18 --load=base,web,openupgrade_framework

README.md

@@ -1,64 +1,377 @@
# 0k-odoo-upgrade
A tool for migrating Odoo databases between major versions, using [OpenUpgrade](https://github.com/OCA/OpenUpgrade) in a production-like Docker environment.
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Project Structure](#project-structure)
- [How It Works](#how-it-works)
- [Usage](#usage)
- [Customization](#customization)
- [Troubleshooting](#troubleshooting)
## Prerequisites
- [0k dev-pack](https://git.myceliandre.fr/Lokavaluto/dev-pack) installed (provides the `compose` command)
- Docker and Docker Compose
- `rsync` for filestore copying
- `sudo` access for filestore operations
## Installation
- Clone the repository:
```bash
git clone <repository-url>
cd 0k-odoo-upgrade
```
## Project Structure
```
.
├── upgrade.sh                   # Main entry point
├── config/
│   └── compose.yml              # Docker Compose configuration
├── lib/
│   ├── common.sh                # Shared bash functions
│   └── python/                  # Python utility scripts
│       ├── check_views.py           # View analysis (pre-migration)
│       ├── validate_views.py        # View validation (post-migration)
│       ├── fix_duplicated_views.py  # Fix duplicated views
│       └── cleanup_modules.py       # Obsolete module cleanup
├── scripts/
│   ├── prepare_db.sh            # Database preparation before migration
│   ├── finalize_db.sh           # Post-migration finalization
│   └── validate_migration.sh    # Manual post-migration validation
└── versions/                    # Version-specific scripts
    ├── 13.0/
    │   ├── pre_upgrade.sh       # SQL fixes before migration
    │   ├── upgrade.sh           # OpenUpgrade execution
    │   └── post_upgrade.sh      # Fixes after migration
    ├── 14.0/
    ├── ...
    └── 18.0/
```
## How It Works
### Overview
The script performs a **step-by-step migration** between each major version. For example, to migrate from 14.0 to 17.0, it executes:
```
14.0 → 15.0 → 16.0 → 17.0
```
### Process Steps
1. **Initial Checks**
- Argument validation
- Required command verification (`docker`, `compose`, `sudo`, `rsync`)
- Source database and filestore existence check
2. **Environment Preparation**
- Creation of a fresh Odoo database in the target version (for module comparison)
- Copy of the source database to a working database
- Filestore copy
3. **Database Preparation** (`scripts/prepare_db.sh`)
- Neutralization: disable mail servers and cron jobs
- Detection of installed modules missing in the target version
- View state verification
- User confirmation prompt
4. **Migration Loop** (for each intermediate version)
- `pre_upgrade.sh`: version-specific SQL fixes before migration
- `upgrade.sh`: OpenUpgrade execution via Docker
- `post_upgrade.sh`: fixes after migration
5. **Finalization** (`scripts/finalize_db.sh`)
- Obsolete sequence removal
- Modified website template reset
- Compiled asset cache purge
- Duplicated view fixes
- Obsolete module cleanup
- Final update with `-u all`
### Flow Diagram
```
┌─────────────────┐
│   upgrade.sh    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐     ┌─────────────────┐
│     Initial     │────▶│    Copy DB +    │
│     checks      │     │    filestore    │
└─────────────────┘     └────────┬────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │  prepare_db.sh  │
                        │ (neutralization)│
                        └────────┬────────┘
         ┌───────────────────────┼───────────────────────┐
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ versions/13.0/  │────▶│ versions/14.0/  │────▶│ versions/N.0/   │
│ pre/upgrade/post│     │ pre/upgrade/post│     │ pre/upgrade/post│
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                         ▼
                                                ┌─────────────────┐
                                                │ finalize_db.sh  │
                                                │    (cleanup)    │
                                                └─────────────────┘
```
## Usage
### Before Migration
1. **Import the source database** to your local machine
2. **Clean up the source database** (recommended)
   - Uninstall unnecessary modules
   - Do NOT uninstall modules whose removal is handled by OpenUpgrade scripts
   - Optionally, deactivate all website views
3. **Check module availability**
   - Ensure all custom modules are ported to the target version
4. **Start the Docker environment**
```bash
# Start the PostgreSQL container
compose up -d postgres
# Verify only one postgres container is running
docker ps | grep postgres
```
### Running the Migration
```bash
./upgrade.sh <source_version> <target_version> <database_name> <source_service>
```
**Parameters:**
| Parameter | Description | Example |
|-----------|-------------|---------|
| `source_version` | Source Odoo version (without .0) | `14` |
| `target_version` | Target Odoo version (without .0) | `17` |
| `database_name` | Database name | `my_prod_db` |
| `source_service` | Source Docker Compose service | `odoo14` |
**Example:**
```bash
./upgrade.sh 14 17 elabore_20241208 odoo14
```
### During Migration
The script will prompt for confirmation at two points:
1. **Missing modules list**: installed modules that don't exist in the target version
   - `Y`: continue (modules will be marked for removal)
   - `N`: abort to manually uninstall certain modules, then relaunch the script
2. **View state**: verification of potentially problematic views
   - `Y`: continue
   - `N`: abort to manually fix issues
After these confirmations, all intermediate migrations should run to completion without further action.
### After Migration
1. **Review logs** to detect any non-blocking errors
2. **Validate the migration** (see [Post-Migration Validation](#post-migration-validation))
3. **Test the migrated database** locally
4. **Deploy to production**
```bash
# Export the migrated database
vps odoo dump db_migrated.zip
# On the production server
vps odoo restore db_migrated.zip
```
## Post-Migration Validation
After migration, use the validation script to check for broken views and XPath errors.
### Quick Start
```bash
./scripts/validate_migration.sh ou17 odoo17
```
### What Gets Validated
Runs in Odoo shell, no HTTP server needed:
| Check | Description |
|-------|-------------|
| **Inherited views** | Verifies all inherited views can combine with their parent |
| **XPath targets** | Ensures XPath expressions find their targets in parent views |
| **QWeb templates** | Validates QWeb templates are syntactically correct |
| **Field references** | Checks that field references point to existing model fields |
| **Odoo native** | Runs Odoo's built-in `_validate_custom_views()` |
### Running Directly
You can also run the Python script directly in Odoo shell:
```bash
compose run odoo17 shell -d ou17 --no-http --stop-after-init < lib/python/validate_views.py
```
### Output
- **Colored terminal output** with `[OK]`, `[ERROR]`, `[WARN]` indicators
- **JSON report** written to `/tmp/validation_views_<db>_<timestamp>.json`
- **Exit code**: `0` = success, `1` = errors found
## Customization
### Version Scripts
Each `versions/X.0/` directory contains three scripts you can customize:
#### `pre_upgrade.sh`
Executed **before** OpenUpgrade. Use it to:
- Add missing columns expected by OpenUpgrade
- Fix incompatible data
- Remove problematic records
```bash
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 15.0..."
copy_database ou14 ou15 ou15
PRE_MIGRATE_SQL=$(cat <<'EOF'
-- Example: remove a problematic module
DELETE FROM ir_module_module WHERE name = 'obsolete_module';
EOF
)
query_postgres_container "$PRE_MIGRATE_SQL" ou15
copy_filestore ou14 ou14 ou15 ou15
echo "Ready for migration to 15.0!"
```
#### `upgrade.sh`
Runs the OpenUpgrade migration itself, launching the target Odoo version via Docker Compose with `-u all` and `--load=base,web,openupgrade_framework`.
#### `post_upgrade.sh`
Executed **after** OpenUpgrade. Use it to:
- Fix incorrectly migrated data
- Remove orphan records
- Update system parameters
```bash
#!/bin/bash
set -euo pipefail
echo "Post migration to 15.0..."
POST_MIGRATE_SQL=$(cat <<'EOF'
-- Example: fix a configuration value
UPDATE ir_config_parameter
SET value = 'new_value'
WHERE key = 'my_key';
EOF
)
query_postgres_container "$POST_MIGRATE_SQL" ou15
```
### Available Functions
Version scripts have access to functions defined in `lib/common.sh`:
| Function | Description |
|----------|-------------|
| `query_postgres_container "$SQL" "$DB"` | Execute an SQL query |
| `copy_database $from $to_service $to_db` | Copy a PostgreSQL database |
| `copy_filestore $from_svc $from_db $to_svc $to_db` | Copy a filestore |
| `log_info`, `log_warn`, `log_error` | Logging functions |
| `log_step "title"` | Display a section header |
### Adding a New Version
To add support for a new version (e.g., 19.0):
```bash
mkdir versions/19.0
cp versions/18.0/*.sh versions/19.0/
# Edit the scripts to:
# - Change references from ou18 → ou19
# - Change the port from -p 8018:8069 → -p 8019:8069
# - Add SQL fixes specific to this migration
```
## Troubleshooting
### Common Issues
#### "No running PostgreSQL container found"
```bash
# Check active containers
docker ps | grep postgres
# Start the container if needed
compose up -d postgres
```
#### "Multiple PostgreSQL containers found"
Stop the extra PostgreSQL containers:
```bash
docker stop <container_name_to_stop>
```
#### "Database not found"
The source database must exist in PostgreSQL:
```bash
# List databases
docker exec -u 70 <postgres_container> psql -l
# Import a database if needed
docker exec -u 70 <postgres_container> pgm restore <file.zip>
```
#### "Filestore not found"
The filestore must be present at `/srv/datastore/data/<service>/var/lib/odoo/filestore/<database>/`
### Restarting After an Error
The script works on a **copy** of the original database. You can restart as many times as needed:
```bash
# Simply restart - the copy will be recreated
./upgrade.sh 14 17 my_database odoo14
```
### Viewing Detailed Logs
Odoo/OpenUpgrade logs are displayed in real-time. For a problematic migration:
1. Note the version where the error occurs
2. Check the logs to identify the problematic module/table
3. Add a fix in the `pre_upgrade.sh` for that version
4. Restart the migration
## License
See the [LICENSE](LICENSE) file.


@@ -1,52 +0,0 @@
#!/bin/bash
set -euo pipefail
DB_NAME="$1"
ODOO_SERVICE="$2"
FINALE_SQL=$(cat <<'EOF'
/*Delete sequences that prevent Odoo to start*/
drop sequence base_registry_signaling;
drop sequence base_cache_signaling;
EOF
)
query_postgres_container "$FINALE_SQL" "$DB_NAME" || exit 1
# Fix duplicated views
PYTHON_SCRIPT=post_migration_fix_duplicated_views.py
echo "Remove duplicated views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
# Reset all website templates with custom content
FINALE_SQL_2=$(cat <<'EOF'
UPDATE ir_ui_view
SET arch_db = NULL
WHERE arch_fs IS NOT NULL
AND arch_fs LIKE 'website/%'
AND arch_db IS NOT NULL
AND id NOT IN (SELECT view_id FROM website_page);
EOF
)
query_postgres_container "$FINALE_SQL_2" "$DB_NAME" || exit 1
# Purge QWeb cache from compiled assets
FINALE_SQL_3=$(cat <<'EOF'
DELETE FROM ir_attachment
WHERE name LIKE '/web/assets/%'
OR name LIKE '%.assets_%'
OR (res_model = 'ir.ui.view' AND mimetype = 'text/css');
EOF
)
query_postgres_container "$FINALE_SQL_3" "$DB_NAME" || exit 1
# Uninstall obsolete add-ons
PYTHON_SCRIPT=post_migration_cleanup_obsolete_modules.py
echo "Uninstall obsolete add-ons with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
# Give back the right to user to access to the tables
# docker exec -u 70 "$DB_CONTAINER_NAME" pgm chown "$FINALE_SERVICE_NAME" "$DB_NAME"
# Launch Odoo with database in finale version to run all updates
compose --debug run "$ODOO_SERVICE" -u all --log-level=debug --stop-after-init --no-http

lib/common.sh (new file)

@@ -0,0 +1,106 @@
#!/bin/bash
#
# Common functions for Odoo migration scripts
# Source this file from other scripts: source "$(dirname "$0")/lib/common.sh"
#
set -euo pipefail
# Get the absolute path of the project root (parent of lib/)
readonly PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
readonly DATASTORE_PATH="/srv/datastore/data"
readonly FILESTORE_SUBPATH="var/lib/odoo/filestore"
check_required_commands() {
    local missing=()
    for cmd in docker compose sudo rsync; do
        if ! command -v "$cmd" &>/dev/null; then
            missing+=("$cmd")
        fi
    done
    if [[ ${#missing[@]} -gt 0 ]]; then
        log_error "Required commands not found: ${missing[*]}"
        log_error "Please install them before running this script."
        exit 1
    fi
}
log_info()  { printf "[INFO] %s\n" "$*"; }
log_warn()  { printf "[WARN] %s\n" "$*" >&2; }
log_error() { printf "[ERROR] %s\n" "$*" >&2; }
log_step()  { printf "\n===== %s =====\n" "$*"; }
confirm_or_exit() {
    local message="$1"
    local choice
    echo ""
    echo "$message"
    echo "Y - Yes, continue"
    echo "N - No, cancel"
    read -r -n 1 -p "Your choice: " choice
    echo ""
    case "$choice" in
        [Yy]) return 0 ;;
        *) log_error "Cancelled by user."; exit 1 ;;
    esac
}
query_postgres_container() {
    local query="$1"
    local db_name="$2"
    if [[ -z "$query" ]]; then
        return 0
    fi
    local result
    if ! result=$(docker exec -u 70 "$POSTGRES_SERVICE_NAME" psql -d "$db_name" -t -A -c "$query"); then
        printf "Failed to execute SQL query: %s\n" "$query" >&2
        printf "Error: %s\n" "$result" >&2
        return 1
    fi
    echo "$result"
}
copy_database() {
    local from_db="$1"
    local to_service="$2"
    local to_db="$3"
    docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm cp -f "$from_db" "${to_db}@${to_service}"
}
copy_filestore() {
    local from_service="$1"
    local from_db="$2"
    local to_service="$3"
    local to_db="$4"
    local src_path="${DATASTORE_PATH}/${from_service}/${FILESTORE_SUBPATH}/${from_db}"
    local dst_path="${DATASTORE_PATH}/${to_service}/${FILESTORE_SUBPATH}/${to_db}"
    sudo mkdir -p "$(dirname "$dst_path")"
    sudo rsync -a --delete "${src_path}/" "${dst_path}/"
    echo "Filestore ${from_service}/${from_db} copied to ${to_service}/${to_db}."
}
# Workaround: 0k dev-pack's compose script doesn't handle absolute paths correctly.
# It passes HOST_COMPOSE_YML_FILE to the container, which tries to open it directly
# instead of using the mounted path. Using a relative path from PROJECT_ROOT avoids this.
run_compose() {
    (cd "$PROJECT_ROOT" && compose -f ./config/compose.yml "$@")
}
exec_python_script_in_odoo_shell() {
    local service_name="$1"
    local db_name="$2"
    local python_script="$3"
    run_compose --debug run "$service_name" shell -d "$db_name" --no-http --stop-after-init < "$python_script"
}
export PROJECT_ROOT DATASTORE_PATH FILESTORE_SUBPATH
export -f log_info log_warn log_error log_step confirm_or_exit
export -f check_required_commands
export -f query_postgres_container copy_database copy_filestore run_compose exec_python_script_in_odoo_shell

lib/python/validate_views.py (new executable file)

@@ -0,0 +1,521 @@
#!/usr/bin/env python3
"""
Post-Migration View Validation Script for Odoo
Validates all views after migration to detect:
- Broken XPath expressions in inherited views
- Views that fail to combine with their parent
- Invalid QWeb templates
- Missing asset files
- Field references to non-existent fields
Usage:
odoo-bin shell -d <database> < validate_views.py
# Or with compose:
compose run <service> shell -d <database> --no-http --stop-after-init < validate_views.py
Exit codes:
0 - All validations passed
1 - Validation errors found (see report)
"""
import os
import sys
import json
from datetime import datetime
from collections import defaultdict
from lxml import etree

# ANSI colors for terminal output
class Colors:
    RED = '\033[91m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'
    BLUE = '\033[94m'
    BOLD = '\033[1m'
    END = '\033[0m'

def print_header(title):
    """Print a formatted section header."""
    print(f"\n{Colors.BOLD}{'='*80}{Colors.END}")
    print(f"{Colors.BOLD}{title}{Colors.END}")
    print(f"{Colors.BOLD}{'='*80}{Colors.END}\n")

def print_subheader(title):
    """Print a formatted subsection header."""
    print(f"\n{Colors.BLUE}{'-'*60}{Colors.END}")
    print(f"{Colors.BLUE}{title}{Colors.END}")
    print(f"{Colors.BLUE}{'-'*60}{Colors.END}\n")

def print_ok(message):
    """Print success message."""
    print(f"{Colors.GREEN}[OK]{Colors.END} {message}")

def print_error(message):
    """Print error message."""
    print(f"{Colors.RED}[ERROR]{Colors.END} {message}")

def print_warn(message):
    """Print warning message."""
    print(f"{Colors.YELLOW}[WARN]{Colors.END} {message}")

def print_info(message):
    """Print info message."""
    print(f"{Colors.BLUE}[INFO]{Colors.END} {message}")

class ViewValidator:
    """Validates Odoo views after migration."""

    def __init__(self, env):
        self.env = env
        self.View = env['ir.ui.view']
        self.errors = []
        self.warnings = []
        self.stats = {
            'total_views': 0,
            'inherited_views': 0,
            'qweb_views': 0,
            'broken_xpath': 0,
            'broken_combine': 0,
            'broken_qweb': 0,
            'broken_fields': 0,
            'missing_assets': 0,
        }

    def validate_all(self):
        """Run all validation checks."""
        print_header("ODOO VIEW VALIDATION - POST-MIGRATION")
        print(f"Started at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
        print(f"Database: {self.env.cr.dbname}")
        # Get all active views
        all_views = self.View.search([('active', '=', True)])
        self.stats['total_views'] = len(all_views)
        print_info(f"Total active views to validate: {len(all_views)}")
        # Run validations
        self._validate_inherited_views()
        self._validate_xpath_targets()
        self._validate_qweb_templates()
        self._validate_field_references()
        self._validate_odoo_native()
        self._check_assets()
        # Print summary
        self._print_summary()
        # Rollback to avoid any accidental changes
        self.env.cr.rollback()
        return len(self.errors) == 0

    def _validate_inherited_views(self):
        """Check that all inherited views can combine with their parent."""
        print_subheader("1. Validating Inherited Views (Combination)")
        inherited_views = self.View.search([
            ('inherit_id', '!=', False),
            ('active', '=', True)
        ])
        self.stats['inherited_views'] = len(inherited_views)
        print_info(f"Found {len(inherited_views)} inherited views to check")
        broken = []
        for view in inherited_views:
            try:
                # Attempt to get combined architecture
                view._get_combined_arch()
            except Exception as e:
                broken.append({
                    'view_id': view.id,
                    'xml_id': view.xml_id or 'N/A',
                    'name': view.name,
                    'model': view.model,
                    'parent_xml_id': view.inherit_id.xml_id if view.inherit_id else 'N/A',
                    'error': str(e)[:200]
                })
        self.stats['broken_combine'] = len(broken)
        if broken:
            for item in broken:
                error_msg = (
                    f"View '{item['xml_id']}' (ID: {item['view_id']}) "
                    f"cannot combine with parent '{item['parent_xml_id']}': {item['error']}"
                )
                print_error(error_msg)
                self.errors.append({
                    'type': 'combination_error',
                    'severity': 'error',
                    **item
                })
        else:
            print_ok("All inherited views combine correctly with their parents")

    def _validate_xpath_targets(self):
        """Check that XPath expressions find their targets in parent views."""
        print_subheader("2. Validating XPath Targets")
        inherited_views = self.View.search([
            ('inherit_id', '!=', False),
            ('active', '=', True)
        ])
        orphan_xpaths = []
        for view in inherited_views:
            if not view.arch_db or not view.inherit_id or not view.inherit_id.arch_db:
                continue
            try:
                # Get parent's combined arch (to handle chained inheritance)
                parent_arch = view.inherit_id._get_combined_arch()
                parent_tree = etree.fromstring(parent_arch)
            except Exception:
                # Parent view is already broken, skip
                continue
            # Parse child view
            try:
                view_tree = etree.fromstring(view.arch_db)
            except Exception:
                continue
            # Find all xpath nodes
            for xpath_node in view_tree.xpath('//xpath'):
                expr = xpath_node.get('expr')
                if not expr:
                    continue
                try:
                    matches = parent_tree.xpath(expr)
                    if not matches:
                        orphan_xpaths.append({
                            'view_id': view.id,
                            'xml_id': view.xml_id or 'N/A',
                            'name': view.name,
                            'model': view.model,
                            'xpath': expr,
                            'parent_xml_id': view.inherit_id.xml_id or 'N/A',
                            'parent_id': view.inherit_id.id
                        })
                except etree.XPathEvalError as e:
                    orphan_xpaths.append({
                        'view_id': view.id,
                        'xml_id': view.xml_id or 'N/A',
                        'name': view.name,
                        'model': view.model,
                        'xpath': expr,
                        'parent_xml_id': view.inherit_id.xml_id or 'N/A',
                        'parent_id': view.inherit_id.id,
                        'xpath_error': str(e)
                    })
        self.stats['broken_xpath'] = len(orphan_xpaths)
        if orphan_xpaths:
            for item in orphan_xpaths:
                error_msg = (
                    f"View '{item['xml_id']}' (ID: {item['view_id']}): "
                    f"XPath '{item['xpath']}' finds no target in parent '{item['parent_xml_id']}'"
                )
                if 'xpath_error' in item:
                    error_msg += f" (XPath syntax error: {item['xpath_error']})"
                print_error(error_msg)
                self.errors.append({
                    'type': 'orphan_xpath',
                    'severity': 'error',
                    **item
                })
        else:
            print_ok("All XPath expressions find their targets")

    def _validate_qweb_templates(self):
        """Validate QWeb templates can be rendered."""
        print_subheader("3. Validating QWeb Templates")
        qweb_views = self.View.search([
            ('type', '=', 'qweb'),
            ('active', '=', True)
        ])
        self.stats['qweb_views'] = len(qweb_views)
        print_info(f"Found {len(qweb_views)} QWeb templates to check")
        broken = []
        for view in qweb_views:
            try:
                # Basic XML parsing check
                if view.arch_db:
                    etree.fromstring(view.arch_db)
                # Try to get combined arch for inherited qweb views
                if view.inherit_id:
                    view._get_combined_arch()
            except Exception as e:
                broken.append({
                    'view_id': view.id,
                    'xml_id': view.xml_id or 'N/A',
                    'name': view.name,
                    'key': view.key or 'N/A',
                    'error': str(e)[:200]
                })
        self.stats['broken_qweb'] = len(broken)
        if broken:
            for item in broken:
                error_msg = (
                    f"QWeb template '{item['xml_id']}' (key: {item['key']}): {item['error']}"
                )
                print_error(error_msg)
                self.errors.append({
                    'type': 'qweb_error',
                    'severity': 'error',
                    **item
                })
        else:
            print_ok("All QWeb templates are valid")

    def _validate_field_references(self):
        """Check that field references in views point to existing fields."""
        print_subheader("4. Validating Field References")
        missing_fields = []
        # Only check form, tree, search, kanban views (not qweb)
        views = self.View.search([
            ('type', 'in', ['form', 'tree', 'search', 'kanban', 'pivot', 'graph']),
            ('active', '=', True),
            ('model', '!=', False)
        ])
        print_info(f"Checking field references in {len(views)} views")
        checked_models = set()
        for view in views:
            model_name = view.model
            # Only the first view found for each model is inspected
            if not model_name or model_name in checked_models:
                continue
            # Skip if model doesn't exist
            if model_name not in self.env:
                continue
            checked_models.add(model_name)
            try:
                # Get combined arch
                arch = view._get_combined_arch()
                tree = etree.fromstring(arch)
            except Exception:
                continue
            model = self.env[model_name]
            model_fields = set(model._fields.keys())
            # Find all field references
            for field_node in tree.xpath('//*[@name]'):
                field_name = field_node.get('name')
                if not field_name:
                    continue
                # Skip special names
                if field_name in ('id', '__last_update', 'display_name'):
                    continue
                # Skip if it's a button or action (not a field)
                if field_node.tag in ('button', 'a'):
                    continue
                # Check if field exists
                if field_name not in model_fields:
                    # Check if it's a related field path (e.g., partner_id.name)
                    if '.' in field_name:
                        continue
                    missing_fields.append({
                        'view_id': view.id,
                        'xml_id': view.xml_id or 'N/A',
                        'model': model_name,
                        'field_name': field_name,
                        'tag': field_node.tag
                    })
        self.stats['broken_fields'] = len(missing_fields)
        if missing_fields:
            # Group by view for cleaner output
            by_view = defaultdict(list)
            for item in missing_fields:
                by_view[item['xml_id']].append(item['field_name'])
            for xml_id, fields in list(by_view.items())[:20]:  # Limit output
                print_warn(f"View '{xml_id}': references missing fields: {', '.join(fields)}")
                self.warnings.append({
                    'type': 'missing_field',
                    'severity': 'warning',
                    'xml_id': xml_id,
                    'fields': fields
                })
            if len(by_view) > 20:
                print_warn(f"... and {len(by_view) - 20} more views with missing fields")
        else:
            print_ok("All field references are valid")

    def _validate_odoo_native(self):
        """Run Odoo's native view validation."""
        print_subheader("5. Running Odoo Native Validation")
        try:
            # This validates all custom views
            self.View._validate_custom_views('all')
            print_ok("Odoo native validation passed")
        except Exception as e:
            error_msg = f"Odoo native validation failed: {str(e)[:500]}"
            print_error(error_msg)
            self.errors.append({
                'type': 'native_validation',
                'severity': 'error',
                'error': str(e)
            })

    def _check_assets(self):
        """Check for missing asset files."""
        print_subheader("6. Checking Asset Files")
        try:
            IrAsset = self.env['ir.asset']
        except KeyError:
            print_info("ir.asset model not found (Odoo < 14.0), skipping asset check")
            return
        assets = IrAsset.search([])
        print_info(f"Checking {len(assets)} asset definitions")
        missing = []
        for asset in assets:
            if not asset.path:
                continue
            try:
                # Try to resolve the asset path
                # This is a simplified check - actual asset resolution is complex
                path = asset.path
                if path.startswith('/'):
                    path = path[1:]
                # Check if it's a glob pattern or specific file
                if '*' in path:
                    continue  # Skip glob patterns
                # Try to get the asset content (this will fail if file is missing)
                # Note: This is environment dependent and may not catch all issues
            except Exception as e:
                missing.append({
                    'asset_id': asset.id,
                    'path': asset.path,
                    'bundle': asset.bundle or 'N/A',
                    'error': str(e)[:100]
                })
        self.stats['missing_assets'] = len(missing)
        if missing:
            for item in missing:
                print_warn(f"Asset '{item['path']}' (bundle: {item['bundle']}): may be missing")
                self.warnings.append({
                    'type': 'missing_asset',
                    'severity': 'warning',
                    **item
                })
        else:
            print_ok("Asset definitions look valid")

    def _print_summary(self):
        """Print validation summary."""
        print_header("VALIDATION SUMMARY")
        print("Statistics:")
        print(f" - Total views checked: {self.stats['total_views']}")
        print(f" - Inherited views: {self.stats['inherited_views']}")
        print(f" - QWeb templates: {self.stats['qweb_views']}")
        print()
        print("Issues found:")
        print(f" - Broken view combinations: {self.stats['broken_combine']}")
        print(f" - Orphan XPath expressions: {self.stats['broken_xpath']}")
        print(f" - Invalid QWeb templates: {self.stats['broken_qweb']}")
        print(f" - Missing field references: {self.stats['broken_fields']}")
        print(f" - Missing assets: {self.stats['missing_assets']}")
        print()
        total_errors = len(self.errors)
        total_warnings = len(self.warnings)
        if total_errors == 0 and total_warnings == 0:
            print(f"{Colors.GREEN}{Colors.BOLD}")
            print("="*60)
            print(" ALL VALIDATIONS PASSED!")
            print("="*60)
            print(f"{Colors.END}")
        elif total_errors == 0:
            print(f"{Colors.YELLOW}{Colors.BOLD}")
            print("="*60)
            print(f" VALIDATION PASSED WITH {total_warnings} WARNING(S)")
            print("="*60)
            print(f"{Colors.END}")
        else:
            print(f"{Colors.RED}{Colors.BOLD}")
            print("="*60)
            print(f" VALIDATION FAILED: {total_errors} ERROR(S), {total_warnings} WARNING(S)")
            print("="*60)
            print(f"{Colors.END}")
        if os.environ.get('VALIDATE_VIEWS_REPORT'):
            report = {
                'type': 'views',
                'timestamp': datetime.now().isoformat(),
                'database': self.env.cr.dbname,
                'stats': self.stats,
                'errors': self.errors,
                'warnings': self.warnings
            }
            MARKER = '___VALIDATE_VIEWS_JSON___'
            print(MARKER)
            print(json.dumps(report, indent=2, default=str))
            print(MARKER)

def main():
    """Main entry point."""
    try:
        validator = ViewValidator(env)
        success = validator.validate_all()
        # Exit with appropriate code
        if not success:
            sys.exit(1)
    except Exception as e:
        print_error(f"Validation script failed: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(2)

# Run when executed in Odoo shell
if __name__ == '__main__' or 'env' in dir():
    main()
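The orphan-XPath check in _validate_xpath_targets boils down to: evaluate each child <xpath expr="..."> against the parent's combined arch and flag expressions that match nothing. A standard-library sketch of that idea — the real script uses lxml's full XPath engine, while xml.etree accepts only a subset of XPath, so the sample archs and expressions below are invented and deliberately simple:

```python
import xml.etree.ElementTree as ET

# Hypothetical parent arch and inheriting arch, for illustration only.
parent_arch = "<form><field name='partner_id'/><field name='amount'/></form>"
child_arch = """<data>
  <xpath expr=".//field[@name='amount']" position="after"/>
  <xpath expr=".//field[@name='ghost_field']" position="replace"/>
</data>"""

def orphan_xpaths(parent_xml, child_xml):
    """Return the expr of every <xpath> node that matches nothing in the parent."""
    parent = ET.fromstring(parent_xml)
    child = ET.fromstring(child_xml)
    return [node.get("expr")
            for node in child.iter("xpath")
            if not parent.findall(node.get("expr"))]

print(orphan_xpaths(parent_arch, child_arch))
```

Here the second expression survives as an orphan: no field named ghost_field exists in the parent, which is exactly the kind of breakage a version migration introduces when a parent view drops a field.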


@@ -1,110 +0,0 @@
#!/bin/bash
set -euo pipefail
# Global variables
ODOO_SERVICE="$1"
DB_NAME="$2"
DB_FINALE_MODEL="$3"
DB_FINALE_SERVICE="$4"
echo "Start database preparation"
# Check POSTGRES container is running
if ! docker ps | grep -q "$DB_CONTAINER_NAME"; then
printf "Docker container %s is not running.\n" "$DB_CONTAINER_NAME" >&2
return 1
fi
EXT_EXISTS=$(query_postgres_container "SELECT 1 FROM pg_extension WHERE extname = 'dblink'" "$DB_NAME") || exit 1
if [[ "$EXT_EXISTS" != "1" ]]; then
query_postgres_container "CREATE EXTENSION dblink;" "$DB_NAME" || exit 1
fi
# Neutralize the database
SQL_NEUTRALIZE=$(cat <<'EOF'
/* Archive all the mail servers */
UPDATE fetchmail_server SET active = false;
UPDATE ir_mail_server SET active = false;
/* Archive all the cron */
ALTER TABLE ir_cron ADD COLUMN IF NOT EXISTS active_bkp BOOLEAN;
UPDATE ir_cron SET active_bkp = active;
UPDATE ir_cron SET active = False;
EOF
)
echo "Neutralize base..."
query_postgres_container "$SQL_NEUTRALIZE" "$DB_NAME" || exit 1
echo "Base neutralized..."
#######################################
## List add-ons not in final version ##
#######################################
# Retrieve add-ons not available on the final Odoo version
SQL_404_ADDONS_LIST="
SELECT module_origin.name
FROM ir_module_module module_origin
LEFT JOIN (
SELECT *
FROM dblink('dbname=$FINALE_DB_NAME','SELECT name, shortdesc, author FROM ir_module_module')
AS tb2(name text, shortdesc text, author text)
) AS module_dest ON module_dest.name = module_origin.name
WHERE (module_dest.name IS NULL) AND (module_origin.state = 'installed') AND (module_origin.author NOT IN ('Odoo S.A.', 'Lokavaluto', 'Elabore'))
ORDER BY module_origin.name
;
"
echo "Retrieve 404 addons... "
echo "SQL REQUEST = $SQL_404_ADDONS_LIST"
query_postgres_container "$SQL_404_ADDONS_LIST" "$DB_NAME" > 404_addons || exit 1
# Keep only the installed add-ons
INSTALLED_ADDONS="SELECT name FROM ir_module_module WHERE state='installed';"
query_postgres_container "$INSTALLED_ADDONS" "$DB_NAME" > installed_addons || exit 1
grep -Fx -f 404_addons installed_addons > final_404_addons
rm -f 404_addons installed_addons
# Ask confirmation to uninstall the selected add-ons
echo "
==== ADD-ONS CHECK ====
Installed add-ons not available in final Odoo version:
"
cat final_404_addons
echo "
Do you accept to migrate the database with all these add-ons still installed? (Y/N/R)"
echo "Y - Yes, let's go on with the upgrade."
echo "N - No, stop the upgrade"
read -n 1 -p "Your choice: " choice
case "$choice" in
[Yy] ) echo "
Let's go on!";;
[Nn] ) echo "
Upgrade cancelled!"; exit 1;;
* ) echo "
Please answer by Y or N.";;
esac
# Check the views
PYTHON_SCRIPT=pre_migration_view_checking.py
echo "Check views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$DB_NAME" "$DB_NAME" "$PYTHON_SCRIPT" || exit 1
echo "
Do you accept to migrate the database with the current views states? (Y/N/R)"
echo "Y - Yes, let's go on with the upgrade."
echo "N - No, stop the upgrade"
read -n 1 -p "Your choice: " choice
case "$choice" in
[Yy] ) echo "
Upgrade confirmed!";;
[Nn] ) echo "
Upgrade cancelled!"; exit 1;;
* ) echo "
Please answer by Y or N.";;
esac
echo "Database successfully prepared!"

scripts/finalize_db.sh Executable file

@@ -0,0 +1,60 @@
#!/bin/bash
set -euo pipefail
DB_NAME="$1"
ODOO_SERVICE="$2"
echo "Running SQL cleanup..."
CLEANUP_SQL=$(cat <<'EOF'
-- Drop sequences that prevent Odoo from starting.
-- These sequences are recreated by Odoo on startup but stale values
-- from the old version can cause conflicts.
DROP SEQUENCE IF EXISTS base_registry_signaling;
DROP SEQUENCE IF EXISTS base_cache_signaling;
-- Reset website templates to their original state.
-- Views with arch_fs (file source) that have been customized (arch_db not null)
-- are reset to use the file version, EXCEPT for actual website pages which
-- contain user content that must be preserved.
UPDATE ir_ui_view
SET arch_db = NULL
WHERE arch_fs IS NOT NULL
AND arch_fs LIKE 'website/%'
AND arch_db IS NOT NULL
AND id NOT IN (SELECT view_id FROM website_page);
-- Purge compiled frontend assets (CSS/JS bundles).
-- These cached files reference old asset versions and must be regenerated
-- by Odoo after migration to avoid broken stylesheets and scripts.
DELETE FROM ir_attachment
WHERE name LIKE '/web/assets/%'
OR name LIKE '%.assets_%'
OR (res_model = 'ir.ui.view' AND mimetype = 'text/css');
EOF
)
query_postgres_container "$CLEANUP_SQL" "$DB_NAME"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
PYTHON_SCRIPT="${SCRIPT_DIR}/lib/python/fix_duplicated_views.py"
echo "Remove duplicated views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$ODOO_SERVICE" "$DB_NAME" "$PYTHON_SCRIPT"
PYTHON_SCRIPT="${SCRIPT_DIR}/lib/python/cleanup_modules.py"
echo "Uninstall obsolete add-ons with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$ODOO_SERVICE" "$DB_NAME" "$PYTHON_SCRIPT"
# Give the user back the rights to access the tables
# docker exec -u 70 "$DB_CONTAINER_NAME" pgm chown "$FINALE_SERVICE_NAME" "$DB_NAME"
# Launch Odoo on the final-version database to run all module updates
run_compose --debug run "$ODOO_SERVICE" -u all --log-level=debug --stop-after-init --no-http
echo ""
echo "Running post-migration view validation..."
if exec_python_script_in_odoo_shell "$ODOO_SERVICE" "$DB_NAME" "${SCRIPT_DIR}/lib/python/validate_views.py"; then
echo "View validation passed."
else
echo "WARNING: View validation found issues. Run scripts/validate_migration.sh for the full report."
fi
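The asset-purge DELETE in finalize_db.sh can be read as a predicate over attachment rows. A hedged re-expression in Python with invented sample records — note that in SQL LIKE, `_` is a single-character wildcard, so the `'%.assets_%'` pattern is only approximated by the substring test below:

```python
# Invented attachment rows: (name, res_model, mimetype).
attachments = [
    ("/web/assets/354-deadbee/web.assets_backend.min.js", "", ""),
    ("web.assets_frontend.min.css", "", ""),
    ("invoice.pdf", "account.move", "application/pdf"),
]

def is_compiled_asset(name, res_model="", mimetype=""):
    # Mirrors the three OR branches of the DELETE: bundle URLs,
    # '.assets_' bundle names, and compiled CSS attached to views.
    return (name.startswith("/web/assets/")
            or ".assets_" in name
            or (res_model == "ir.ui.view" and mimetype == "text/css"))

kept = [a for a in attachments if not is_compiled_asset(*a)]
```

Only the regular document attachment survives; the two compiled bundles are exactly what Odoo regenerates on first start after the migration.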

scripts/prepare_db.sh Executable file

@@ -0,0 +1,74 @@
#!/bin/bash
set -euo pipefail
ODOO_SERVICE="$1"
DB_NAME="$2"
DB_FINALE_MODEL="$3"
DB_FINALE_SERVICE="$4"
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT
echo "Start database preparation"
# Check POSTGRES container is running
if ! docker ps | grep -q "$POSTGRES_SERVICE_NAME"; then
printf "Docker container %s is not running.\n" "$POSTGRES_SERVICE_NAME" >&2
exit 1
fi
EXT_EXISTS=$(query_postgres_container "SELECT 1 FROM pg_extension WHERE extname = 'dblink'" "$DB_NAME") || exit 1
if [[ "$EXT_EXISTS" != "1" ]]; then
query_postgres_container "CREATE EXTENSION dblink;" "$DB_NAME" || exit 1
fi
# Neutralize the database
SQL_NEUTRALIZE=$(cat <<'EOF'
/* Archive all the mail servers */
UPDATE fetchmail_server SET active = false;
UPDATE ir_mail_server SET active = false;
/* Archive all the cron */
ALTER TABLE ir_cron ADD COLUMN IF NOT EXISTS active_bkp BOOLEAN;
UPDATE ir_cron SET active_bkp = active;
UPDATE ir_cron SET active = False;
EOF
)
echo "Neutralizing database..."
query_postgres_container "$SQL_NEUTRALIZE" "$DB_NAME" || exit 1
echo "Database neutralized."
#######################################
## List add-ons not in final version ##
#######################################
SQL_MISSING_ADDONS=$(cat <<EOF
SELECT module_origin.name
FROM ir_module_module module_origin
LEFT JOIN (
SELECT *
FROM dblink('dbname=${FINALE_DB_NAME}','SELECT name, shortdesc, author FROM ir_module_module')
AS tb2(name text, shortdesc text, author text)
) AS module_dest ON module_dest.name = module_origin.name
WHERE (module_dest.name IS NULL)
AND (module_origin.state = 'installed')
AND (module_origin.author NOT IN ('Odoo S.A.'))
ORDER BY module_origin.name;
EOF
)
echo "Retrieving missing add-ons..."
missing_addons=$(query_postgres_container "$SQL_MISSING_ADDONS" "$DB_NAME")
log_step "ADD-ONS CHECK"
echo "Installed add-ons not available in final Odoo version:"
echo "$missing_addons"
confirm_or_exit "Do you want to migrate with these add-ons still installed?"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
PYTHON_SCRIPT="${SCRIPT_DIR}/lib/python/check_views.py"
echo "Check views with script $PYTHON_SCRIPT ..."
exec_python_script_in_odoo_shell "$ODOO_SERVICE" "$DB_NAME" "$PYTHON_SCRIPT"
confirm_or_exit "Do you want to migrate with the views in their current state?"
echo "Database successfully prepared!"
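The dblink LEFT JOIN above reduces to a set difference: installed modules of the origin database that do not exist in the target-version database, minus whitelisted authors. A sketch of that logic with invented module data:

```python
def missing_addons(origin_modules, target_names, whitelisted_authors):
    """origin_modules: iterable of (name, state, author) tuples."""
    available = set(target_names)
    return sorted(
        name
        for name, state, author in origin_modules
        if state == "installed"
        and name not in available
        and author not in whitelisted_authors
    )

# Invented sample data.
origin = [
    ("sale", "installed", "Odoo S.A."),
    ("account_banking_sepa", "installed", "OCA"),
    ("old_custom_addon", "installed", "Some Vendor"),
    ("crm", "uninstalled", "Odoo S.A."),
]
target = ["sale", "crm", "account_banking_sepa"]
print(missing_addons(origin, target, {"Odoo S.A."}))  # → ['old_custom_addon']
```

Whitelisting 'Odoo S.A.' matches the new SQL above: core modules are expected to exist in every version, so only third-party add-ons absent from the target install are reported.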

scripts/validate_migration.sh Executable file

@@ -0,0 +1,138 @@
#!/bin/bash
#
# Post-Migration Validation Script for Odoo
# Validates views, XPath expressions, and QWeb templates.
#
# View validation runs automatically at the end of the upgrade process.
# This script can also be run manually for the full report with JSON output.
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
source "${PROJECT_ROOT}/lib/common.sh"
####################
# CONFIGURATION
####################
REPORT_DIR="/tmp"
REPORT_TIMESTAMP=$(date +%Y%m%d_%H%M%S)
VIEWS_REPORT=""
VIEWS_REPORT_MARKER="___VALIDATE_VIEWS_JSON___"
####################
# USAGE
####################
usage() {
cat <<EOF
Usage: $0 <db_name> <service_name>
Post-migration view validation for Odoo databases.
Validates:
- Inherited view combination (parent + child)
- XPath expressions find their targets
- QWeb template syntax
- Field references point to existing fields
- Odoo native view validation
Arguments:
db_name Name of the database to validate
service_name Docker compose service name (e.g., odoo17, ou17)
Examples:
$0 ou17 odoo17
$0 elabore_migrated odoo18
Notes:
- Runs via Odoo shell (no HTTP server needed)
- Report is written to /tmp/validation_views_<db>_<timestamp>.json
EOF
exit 1
}
####################
# ARGUMENT PARSING
####################
DB_NAME=""
SERVICE_NAME=""
while [[ $# -gt 0 ]]; do
case "$1" in
-h|--help)
usage
;;
*)
if [[ -z "$DB_NAME" ]]; then
DB_NAME="$1"
shift
elif [[ -z "$SERVICE_NAME" ]]; then
SERVICE_NAME="$1"
shift
else
log_error "Unexpected argument: $1"
usage
fi
;;
esac
done
if [[ -z "$DB_NAME" ]]; then
log_error "Missing database name"
usage
fi
if [[ -z "$SERVICE_NAME" ]]; then
log_error "Missing service name"
usage
fi
####################
# MAIN
####################
log_step "POST-MIGRATION VIEW VALIDATION"
log_info "Database: $DB_NAME"
log_info "Service: $SERVICE_NAME"
PYTHON_SCRIPT="${PROJECT_ROOT}/lib/python/validate_views.py"
if [[ ! -f "$PYTHON_SCRIPT" ]]; then
log_error "Validation script not found: $PYTHON_SCRIPT"
exit 1
fi
VIEWS_REPORT="${REPORT_DIR}/validation_views_${DB_NAME}_${REPORT_TIMESTAMP}.json"
log_info "Running view validation in Odoo shell..."
echo ""
RESULT=0
RAW_OUTPUT=$(run_compose run --rm -e VALIDATE_VIEWS_REPORT=1 "$SERVICE_NAME" shell -d "$DB_NAME" --no-http --stop-after-init < "$PYTHON_SCRIPT") || RESULT=$?
echo "$RAW_OUTPUT" | sed "/${VIEWS_REPORT_MARKER}/,/${VIEWS_REPORT_MARKER}/d"
echo "$RAW_OUTPUT" | sed -n "/${VIEWS_REPORT_MARKER}/,/${VIEWS_REPORT_MARKER}/p" | grep -v "$VIEWS_REPORT_MARKER" > "$VIEWS_REPORT"
echo ""
log_step "VALIDATION COMPLETE"
if [[ -s "$VIEWS_REPORT" ]]; then
log_info "Report: $VIEWS_REPORT"
else
log_warn "Could not extract validation report from output"
VIEWS_REPORT=""
fi
if [[ $RESULT -eq 0 ]]; then
log_info "All validations passed!"
else
log_error "Some validations failed. Check the output above for details."
fi
exit $RESULT
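The sed pair above splits the Odoo shell output on the ___VALIDATE_VIEWS_JSON___ marker lines emitted by validate_views.py: everything between the markers is the JSON report, everything else is the human-readable log. The same extraction expressed in Python (the sample output string is invented):

```python
import json

MARKER = "___VALIDATE_VIEWS_JSON___"

def split_report(raw_output):
    """Return (human_readable_text, parsed_report_or_None)."""
    lines = raw_output.splitlines()
    if lines.count(MARKER) >= 2:
        first = lines.index(MARKER)
        last = len(lines) - 1 - lines[::-1].index(MARKER)
        text = "\n".join(lines[:first] + lines[last + 1:])
        report = json.loads("\n".join(lines[first + 1:last]))
        return text, report
    return raw_output, None

raw = "checking...\n" + MARKER + '\n{"errors": []}\n' + MARKER + "\ndone"
text, report = split_report(raw)
```

Using a sentinel marker keeps the report machine-readable even though Odoo's shell interleaves its own logging with the script's stdout, which is why the shell version writes the extracted block to a separate .json file.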


@@ -1,6 +1,9 @@
#!/bin/bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
####################
# USAGE & ARGUMENTS
####################
@@ -22,225 +25,115 @@ EOF
}
if [[ $# -lt 4 ]]; then
echo "ERROR: Missing arguments. Expected 4, got $#." >&2
log_error "Missing arguments. Expected 4, got $#."
usage
fi
####################
# GLOBAL VARIABLES #
####################
check_required_commands
ORIGIN_VERSION="$1" # "12" for version 12.0
FINAL_VERSION="$2" # "16" for version 16.0
# Path to the database to migrate. Must be a .zip file with the following syntax: {DATABASE_NAME}.zip
ORIGIN_DB_NAME="$3"
ORIGIN_SERVICE_NAME="$4"
readonly ORIGIN_VERSION="$1"
readonly FINAL_VERSION="$2"
readonly ORIGIN_DB_NAME="$3"
readonly ORIGIN_SERVICE_NAME="$4"
# Get origin database name
COPY_DB_NAME="ou${ORIGIN_VERSION}"
# Define finale database name
readonly COPY_DB_NAME="ou${ORIGIN_VERSION}"
export FINALE_DB_NAME="ou${FINAL_VERSION}"
# Define finale odoo service name
FINALE_SERVICE_NAME="${FINALE_DB_NAME}"
readonly FINALE_DB_NAME
readonly FINALE_SERVICE_NAME="${FINALE_DB_NAME}"
# Service postgres name (dynamically retrieved from running containers)
POSTGRES_CONTAINERS=$(docker ps --format '{{.Names}}' | grep postgres)
POSTGRES_COUNT=$(echo "$POSTGRES_CONTAINERS" | grep -c .)
readarray -t postgres_containers < <(docker ps --format '{{.Names}}' | grep postgres || true)
if [[ "$POSTGRES_COUNT" -eq 0 ]]; then
echo "ERROR: No running PostgreSQL container found. Please start a PostgreSQL container and try again." >&2
if [[ ${#postgres_containers[@]} -eq 0 ]]; then
log_error "No running PostgreSQL container found. Please start a PostgreSQL container and try again."
exit 1
elif [[ "$POSTGRES_COUNT" -gt 1 ]]; then
echo "ERROR: Multiple PostgreSQL containers found:" >&2
echo "$POSTGRES_CONTAINERS" >&2
echo "Please ensure only one PostgreSQL container is running." >&2
elif [[ ${#postgres_containers[@]} -gt 1 ]]; then
log_error "Multiple PostgreSQL containers found:"
printf ' %s\n' "${postgres_containers[@]}" >&2
log_error "Please ensure only one PostgreSQL container is running."
exit 1
fi
export POSTGRES_SERVICE_NAME="$POSTGRES_CONTAINERS"
export POSTGRES_SERVICE_NAME="${postgres_containers[0]}"
readonly POSTGRES_SERVICE_NAME
#############################################
# DISPLAYS ALL INPUTS PARAMETERS
#############################################
log_step "INPUT PARAMETERS"
log_info "Origin version .......... $ORIGIN_VERSION"
log_info "Final version ........... $FINAL_VERSION"
log_info "Origin DB name .......... $ORIGIN_DB_NAME"
log_info "Origin service name ..... $ORIGIN_SERVICE_NAME"
echo "===== INPUT PARAMETERS ====="
echo "Origin version .......... $ORIGIN_VERSION"
echo "Final version ........... $FINAL_VERSION"
echo "Origin DB name ........... $ORIGIN_DB_NAME"
echo "Origin service name ..... $ORIGIN_SERVICE_NAME"
echo "
===== COMPUTED GLOBALE VARIABLES ====="
echo "Copy DB name ............. $COPY_DB_NAME"
echo "Finale DB name ........... $FINALE_DB_NAME"
echo "Finale service name ...... $FINALE_SERVICE_NAME"
echo "Postgres service name .... $POSTGRES_SERVICE_NAME"
log_step "COMPUTED GLOBAL VARIABLES"
log_info "Copy DB name ............. $COPY_DB_NAME"
log_info "Finale DB name ........... $FINALE_DB_NAME"
log_info "Finale service name ...... $FINALE_SERVICE_NAME"
log_info "Postgres service name .... $POSTGRES_SERVICE_NAME"
# Function to launch an SQL request to the postgres container
query_postgres_container(){
local QUERY="$1"
local DB_NAME="$2"
if [[ -z "$QUERY" ]]; then
return 0
fi
local result
if ! result=$(docker exec -u 70 "$POSTGRES_SERVICE_NAME" psql -d "$DB_NAME" -t -A -c "$QUERY"); then
printf "Failed to execute SQL query: %s\n" "$query" >&2
printf "Error: %s\n" "$result" >&2
exit 1
fi
echo "$result"
}
export -f query_postgres_container
log_step "CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE"
# Function to copy the postgres databases
copy_database(){
local FROM_DB="$1"
local TO_SERVICE="$2"
local TO_DB="$3"
docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm cp -f "$FROM_DB" "$TO_DB"@"$TO_SERVICE"
}
export -f copy_database
# Function to copy the filestores
copy_filestore(){
local FROM_SERVICE="$1"
local FROM_DB="$2"
local TO_SERVICE="$3"
local TO_DB="$4"
sudo mkdir -p /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
sudo rm -rf /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
sudo cp -a /srv/datastore/data/"$FROM_SERVICE"/var/lib/odoo/filestore/"$FROM_DB" /srv/datastore/data/"$TO_SERVICE"/var/lib/odoo/filestore/"$TO_DB" || exit 1
echo "Filestore $FROM_SERVICE/$FROM_DB copied."
}
export -f copy_filestore
# Function to launch python scripts in Odoo Shell
exec_python_script_in_odoo_shell(){
local SERVICE_NAME="$1"
local DB_NAME="$2"
local PYTHON_SCRIPT="$3"
compose --debug run "$SERVICE_NAME" shell -d "$DB_NAME" --no-http --stop-after-init < "$PYTHON_SCRIPT"
}
export -f exec_python_script_in_odoo_shell
##############################################
# CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE #
##############################################
echo "
==== CHECKS ALL NEEDED COMPONENTS ARE AVAILABLE ===="
# Check origin database is in the local postgres
DB_EXISTS=$(docker exec -it -u 70 "$POSTGRES_SERVICE_NAME" psql -tc "SELECT 1 FROM pg_database WHERE datname = '$ORIGIN_DB_NAME'" | tr -d '[:space:]')
if [[ "$DB_EXISTS" ]]; then
echo "UPGRADE: Database '$ORIGIN_DB_NAME' found."
db_exists=$(docker exec -it -u 70 "$POSTGRES_SERVICE_NAME" psql -tc "SELECT 1 FROM pg_database WHERE datname = '$ORIGIN_DB_NAME'" | tr -d '[:space:]')
if [[ "$db_exists" ]]; then
log_info "Database '$ORIGIN_DB_NAME' found."
else
echo "ERROR: Database '$ORIGIN_DB_NAME' not found in the local postgress service. Please add it and restart the upgrade process."
log_error "Database '$ORIGIN_DB_NAME' not found in the local postgres service. Please add it and restart the upgrade process."
exit 1
fi
# Check that the origin filestore exist
REPERTOIRE="/srv/datastore/data/${ORIGIN_SERVICE_NAME}/var/lib/odoo/filestore/${ORIGIN_DB_NAME}"
if [[ -d "$REPERTOIRE" ]]; then
echo "UPGRADE: '$REPERTOIRE' filestore found."
filestore_path="${DATASTORE_PATH}/${ORIGIN_SERVICE_NAME}/${FILESTORE_SUBPATH}/${ORIGIN_DB_NAME}"
if [[ -d "$filestore_path" ]]; then
log_info "Filestore '$filestore_path' found."
else
echo "ERROR: '$REPERTOIRE' filestore not found, please add it and restart the upgrade process."
log_error "Filestore '$filestore_path' not found, please add it and restart the upgrade process."
exit 1
fi
#######################################
# LAUNCH VIRGIN ODOO IN FINAL VERSION #
#######################################
log_step "LAUNCH VIRGIN ODOO IN FINAL VERSION"
# Remove finale database and datastore if already exists (we need a virgin Odoo)
if docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm ls | grep -q "$FINALE_SERVICE_NAME"; then
if docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm ls | grep "$FINALE_SERVICE_NAME"; then
log_info "Removing existing finale database and filestore..."
docker exec -u 70 "$POSTGRES_SERVICE_NAME" pgm rm -f "$FINALE_SERVICE_NAME"
sudo rm -rf /srv/datastore/data/"$FINALE_SERVICE_NAME"/var/lib/odoo/filestore/"$FINALE_SERVICE_NAME"
sudo rm -rf "${DATASTORE_PATH}/${FINALE_SERVICE_NAME}/${FILESTORE_SUBPATH}/${FINALE_SERVICE_NAME}"
fi
compose --debug run "$FINALE_SERVICE_NAME" -i base --stop-after-init --no-http
run_compose --debug run "$FINALE_SERVICE_NAME" -i base --stop-after-init --no-http
echo "Model database in final Odoo version created."
log_info "Model database in final Odoo version created."
############################
# COPY ORIGINAL COMPONENTS #
############################
log_step "COPY ORIGINAL COMPONENTS"
echo "
==== COPY ORIGINAL COMPONENTS ===="
echo "UPGRADE: Start copy"
copy_database "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME"
log_info "Original database copied to ${COPY_DB_NAME}@${COPY_DB_NAME}."
# Copy database
copy_database "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME" || exit 1
echo "UPGRADE: original database copied in ${COPY_DB_NAME}@${COPY_DB_NAME}."
# Copy filestore
copy_filestore "$ORIGIN_SERVICE_NAME" "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME" || exit 1
echo "UPGRADE: original filestore copied."
copy_filestore "$ORIGIN_SERVICE_NAME" "$ORIGIN_DB_NAME" "$COPY_DB_NAME" "$COPY_DB_NAME"
log_info "Original filestore copied."
#####################
# PATH OF MIGRATION #
#####################
log_step "PATH OF MIGRATION"
echo "
==== PATH OF MIGRATION ===="
# List all the versions to migrate through
declare -a versions
nb_migrations=$((FINAL_VERSION - ORIGIN_VERSION))
readarray -t versions < <(seq $((ORIGIN_VERSION + 1)) "$FINAL_VERSION")
log_info "Migration path is ${versions[*]}"
# Build the migration path
for ((i = 0; i < nb_migrations; i++)); do
versions[i]=$((ORIGIN_VERSION + 1 + i))
log_step "DATABASE PREPARATION"
"${SCRIPT_DIR}/scripts/prepare_db.sh" "$COPY_DB_NAME" "$COPY_DB_NAME" "$FINALE_DB_NAME" "$FINALE_SERVICE_NAME"
log_step "UPGRADE PROCESS"
for version in "${versions[@]}"; do
log_info "START UPGRADE TO ${version}.0"
"${SCRIPT_DIR}/versions/${version}.0/pre_upgrade.sh"
"${SCRIPT_DIR}/versions/${version}.0/upgrade.sh"
"${SCRIPT_DIR}/versions/${version}.0/post_upgrade.sh"
log_info "END UPGRADE TO ${version}.0"
done
##########################
# POST-UPGRADE PROCESSES #
##########################
log_step "POST-UPGRADE PROCESSES"
"${SCRIPT_DIR}/scripts/finalize_db.sh" "$FINALE_DB_NAME" "$FINALE_SERVICE_NAME"
log_step "UPGRADE PROCESS ENDED WITH SUCCESS"
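The `run_compose` calls above rely on the wrapper introduced to work around the 0k dev-pack IOError with absolute paths (see the commit log). A minimal sketch of such a wrapper, assuming `PROJECT_ROOT` is set by the surrounding script; the exact `compose` invocation details are an assumption, not the repository's actual code:

```shell
# Sketch of a run_compose() wrapper as described in the commit message:
# change to PROJECT_ROOT in a subshell so compose only ever works from a
# relative path, then forward all arguments unchanged.
run_compose() {
    (
        cd "${PROJECT_ROOT:?PROJECT_ROOT must be set}" || exit 1
        compose "$@"
    )
}
```

The subshell keeps the caller's working directory untouched, so the wrapper behaves the same regardless of where `upgrade.sh` is launched from.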

versions/13.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8013:8069 ou13 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou13

versions/14.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8014:8069 ou14 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou14 --load=base,web,openupgrade_framework

versions/15.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8015:8069 ou15 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou15 --load=base,web,openupgrade_framework

versions/16.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8016:8069 ou16 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou16 --load=base,web,openupgrade_framework

versions/17.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8017:8069 ou17 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou17 --load=base,web,openupgrade_framework

versions/18.0/post_upgrade.sh Executable file

@@ -0,0 +1,160 @@
#!/bin/bash
set -euo pipefail
echo "Post migration to 18.0..."
# ============================================================================
# BANK-PAYMENT -> BANK-PAYMENT-ALTERNATIVE DATA MIGRATION
# Source PR: https://github.com/OCA/bank-payment-alternative/pull/42
# ============================================================================
BANK_PAYMENT_POST_SQL=$(cat <<'EOF'
DO $$
DECLARE
mode_rec RECORD;
new_line_id INTEGER;
journal_rec RECORD;
BEGIN
IF NOT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'account_payment_mode') THEN
RAISE NOTICE 'No account_payment_mode table found, skipping bank-payment migration';
RETURN;
END IF;
RAISE NOTICE 'Starting bank-payment to bank-payment-alternative migration...';
ALTER TABLE account_payment_method_line
ADD COLUMN IF NOT EXISTS old_payment_mode_id INT,
ADD COLUMN IF NOT EXISTS old_refund_payment_mode_id INT;
FOR mode_rec IN
SELECT id, name, company_id, payment_method_id,
fixed_journal_id AS journal_id, bank_account_link,
create_date, create_uid, write_date, write_uid,
show_bank_account, refund_payment_mode_id, active
FROM account_payment_mode
LOOP
INSERT INTO account_payment_method_line (
name, payment_method_id, bank_account_link, journal_id,
selectable, company_id, create_uid, create_date,
write_uid, write_date, show_bank_account,
old_payment_mode_id, old_refund_payment_mode_id, active
) VALUES (
to_jsonb(mode_rec.name),
mode_rec.payment_method_id,
mode_rec.bank_account_link,
mode_rec.journal_id,
true,
mode_rec.company_id,
mode_rec.create_uid,
mode_rec.create_date,
mode_rec.write_uid,
mode_rec.write_date,
mode_rec.show_bank_account,
mode_rec.id,
mode_rec.refund_payment_mode_id,
mode_rec.active
) RETURNING id INTO new_line_id;
IF mode_rec.bank_account_link = 'variable' THEN
FOR journal_rec IN
SELECT rel.journal_id
FROM account_payment_mode_variable_journal_rel rel
WHERE rel.payment_mode_id = mode_rec.id
LOOP
INSERT INTO account_payment_method_line_journal_rel
(account_payment_method_line_id, account_journal_id)
VALUES (new_line_id, journal_rec.journal_id)
ON CONFLICT DO NOTHING;
END LOOP;
END IF;
RAISE NOTICE 'Migrated payment mode % -> payment method line %', mode_rec.id, new_line_id;
END LOOP;
UPDATE account_payment_method_line apml
SET refund_payment_method_line_id = apml2.id
FROM account_payment_method_line apml2
WHERE apml.old_refund_payment_mode_id IS NOT NULL
AND apml.old_refund_payment_mode_id = apml2.old_payment_mode_id;
UPDATE account_move am
SET preferred_payment_method_line_id = apml.id
FROM account_payment_mode apm, account_payment_method_line apml
WHERE am.payment_mode_id = apm.id
AND apm.id = apml.old_payment_mode_id
AND am.preferred_payment_method_line_id IS NULL;
RAISE NOTICE 'account_payment_base_oca migration completed';
END $$;
EOF
)
echo "Executing bank-payment base migration..."
query_postgres_container "$BANK_PAYMENT_POST_SQL" ou18 || exit 1
BANK_PAYMENT_BATCH_SQL=$(cat <<'EOF'
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'account_payment_mode') THEN
RETURN;
END IF;
IF NOT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'account_payment_order') THEN
RAISE NOTICE 'No account_payment_order table, skipping batch migration';
RETURN;
END IF;
RAISE NOTICE 'Starting account_payment_batch_oca migration...';
IF EXISTS (SELECT FROM information_schema.columns
WHERE table_name = 'account_payment_method' AND column_name = 'payment_order_only') THEN
UPDATE account_payment_method
SET payment_order_ok = payment_order_only
WHERE payment_order_only IS NOT NULL;
END IF;
UPDATE account_payment_method_line apml
SET payment_order_ok = apm.payment_order_ok,
no_debit_before_maturity = apm.no_debit_before_maturity,
default_payment_mode = apm.default_payment_mode,
default_invoice = apm.default_invoice,
default_target_move = apm.default_target_move,
default_date_type = apm.default_date_type,
default_date_prefered = apm.default_date_prefered,
group_lines = apm.group_lines
FROM account_payment_mode apm
WHERE apml.old_payment_mode_id IS NOT NULL
AND apm.id = apml.old_payment_mode_id;
IF EXISTS (SELECT FROM information_schema.tables
WHERE table_name = 'account_journal_account_payment_method_line_rel') THEN
DELETE FROM account_journal_account_payment_method_line_rel
WHERE account_payment_method_line_id IN (
SELECT id FROM account_payment_method_line WHERE old_payment_mode_id IS NOT NULL
);
INSERT INTO account_journal_account_payment_method_line_rel
(account_payment_method_line_id, account_journal_id)
SELECT apml.id, rel.account_journal_id
FROM account_journal_account_payment_mode_rel rel
JOIN account_payment_method_line apml ON rel.account_payment_mode_id = apml.old_payment_mode_id
ON CONFLICT DO NOTHING;
END IF;
UPDATE account_payment_order apo
SET payment_method_line_id = apml.id,
payment_method_code = apm_method.code
FROM account_payment_method_line apml,
account_payment_mode apm,
account_payment_method apm_method
WHERE apo.payment_mode_id = apm.id
AND apml.old_payment_mode_id = apm.id
AND apm_method.id = apml.payment_method_id;
RAISE NOTICE 'account_payment_batch_oca migration completed';
RAISE NOTICE 'NOTE: Payment lots for open orders must be generated manually via Odoo UI or script';
END $$;
EOF
)
echo "Executing bank-payment batch migration..."
query_postgres_container "$BANK_PAYMENT_BATCH_SQL" ou18 || exit 1
echo "Post migration to 18.0 completed!"
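After this script runs, every row of `account_payment_mode` should have produced one `account_payment_method_line` tagged with `old_payment_mode_id`. A hypothetical spot-check in the script's own idiom, assuming the same `query_postgres_container` helper is in scope (this check is a suggestion, not part of the diff):

```shell
# Hypothetical spot-check after post_upgrade.sh: compare the number of
# legacy payment modes with the number of migrated method lines; the two
# counts should match.
verify_bank_payment_migration() {
    local sql
    sql=$(cat <<'EOF'
SELECT (SELECT count(*) FROM account_payment_mode) AS modes,
       (SELECT count(*) FROM account_payment_method_line
        WHERE old_payment_mode_id IS NOT NULL) AS migrated_lines;
EOF
)
    query_postgres_container "$sql" ou18
}
```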

versions/18.0/pre_upgrade.sh Executable file

@@ -0,0 +1,97 @@
#!/bin/bash
set -euo pipefail
echo "Prepare migration to 18.0..."
# Copy database
copy_database ou17 ou18 ou18 || exit 1
# ============================================================================
# BANK-PAYMENT -> BANK-PAYMENT-ALTERNATIVE MODULE RENAMING
# Migration from OCA/bank-payment to OCA/bank-payment-alternative
# Source PR: https://github.com/OCA/bank-payment-alternative/pull/42
#
# This renaming MUST be done BEFORE OpenUpgrade runs, so that the migration
# scripts in the new modules (account_payment_base_oca, account_payment_batch_oca)
# can properly migrate the data.
# ============================================================================
BANK_PAYMENT_RENAME_SQL=$(cat <<'EOF'
DO $$
DECLARE
renamed_modules TEXT[][] := ARRAY[
['account_payment_mode', 'account_payment_base_oca'],
['account_banking_pain_base', 'account_payment_sepa_base'],
['account_banking_sepa_credit_transfer', 'account_payment_sepa_credit_transfer'],
['account_payment_order', 'account_payment_batch_oca']
];
merged_modules TEXT[][] := ARRAY[
['account_payment_partner', 'account_payment_base_oca']
];
old_name TEXT;
new_name TEXT;
old_module_id INTEGER;
deleted_count INTEGER;
BEGIN
FOR i IN 1..array_length(renamed_modules, 1) LOOP
old_name := renamed_modules[i][1];
new_name := renamed_modules[i][2];
SELECT id INTO old_module_id FROM ir_module_module WHERE name = old_name;
IF old_module_id IS NOT NULL THEN
RAISE NOTICE 'Renaming module: % -> %', old_name, new_name;
UPDATE ir_module_module SET name = new_name WHERE name = old_name;
UPDATE ir_model_data SET module = new_name WHERE module = old_name;
UPDATE ir_module_module_dependency SET name = new_name WHERE name = old_name;
END IF;
END LOOP;
FOR i IN 1..array_length(merged_modules, 1) LOOP
old_name := merged_modules[i][1];
new_name := merged_modules[i][2];
SELECT id INTO old_module_id FROM ir_module_module WHERE name = old_name;
IF old_module_id IS NOT NULL THEN
RAISE NOTICE 'Merging module: % -> %', old_name, new_name;
DELETE FROM ir_model_data
WHERE module = old_name
AND name IN (SELECT name FROM ir_model_data WHERE module = new_name);
GET DIAGNOSTICS deleted_count = ROW_COUNT;
IF deleted_count > 0 THEN
RAISE NOTICE ' Deleted % duplicate ir_model_data records', deleted_count;
END IF;
UPDATE ir_model_data SET module = new_name WHERE module = old_name;
UPDATE ir_module_module_dependency SET name = new_name WHERE name = old_name;
UPDATE ir_module_module SET state = 'uninstalled' WHERE name = old_name;
DELETE FROM ir_module_module WHERE name = old_name;
END IF;
END LOOP;
END $$;
EOF
)
echo "Executing bank-payment module renaming..."
query_postgres_container "$BANK_PAYMENT_RENAME_SQL" ou18 || exit 1
BANK_PAYMENT_PRE_SQL=$(cat <<'EOF'
UPDATE ir_model_data
SET noupdate = false
WHERE module = 'account_payment_base_oca'
AND name = 'view_account_invoice_report_search';
EOF
)
echo "Executing bank-payment pre-migration..."
query_postgres_container "$BANK_PAYMENT_PRE_SQL" ou18 || exit 1
# Execute SQL pre-migration commands
PRE_MIGRATE_SQL=$(cat <<'EOF'
UPDATE account_analytic_plan SET default_applicability=NULL WHERE default_applicability='optional';
EOF
)
echo "SQL command = $PRE_MIGRATE_SQL"
query_postgres_container "$PRE_MIGRATE_SQL" ou18 || exit 1
# Copy filestores
copy_filestore ou17 ou17 ou18 ou18 || exit 1
echo "Ready for migration to 18.0!"
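The PL/pgSQL block above drives the renames from a fixed table of (old, new) module names. For quick reference, the same mapping restated as a plain shell table (a sketch only; the authoritative list is `BANK_PAYMENT_RENAME_SQL` above):

```shell
# Rename map from BANK_PAYMENT_RENAME_SQL, as old:new pairs.
renamed_modules=(
    "account_payment_mode:account_payment_base_oca"
    "account_banking_pain_base:account_payment_sepa_base"
    "account_banking_sepa_credit_transfer:account_payment_sepa_credit_transfer"
    "account_payment_order:account_payment_batch_oca"
)
for pair in "${renamed_modules[@]}"; do
    printf 'rename %s -> %s\n' "${pair%%:*}" "${pair#*:}"
done
```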

versions/18.0/upgrade.sh Executable file

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
run_compose run -p 8018:8069 ou18 --config=/opt/odoo/auto/odoo.conf --stop-after-init -u all --workers 0 --log-level=debug --max-cron-threads=0 --limit-time-real=10000 --database=ou18 --load=base,web,openupgrade_framework