@@ 0,0 1,205 @@
+# Database migration
+
+This document contains all the required information regarding the database structure, migrations, and how to use the
+database migration tool.
+
+## Overview
+
+Here's an example of a database directory structure:
+
+```
+products/<product>/services/db/databases/
+└── migration
+ ├── db_1
+ │ └── 0
+ ├── db_2
+ │ ├── 0
+ │ ├── 1
+ │ ├── 2
+ │ └── current
+ │ ├── 023jh23a_changed_table
+ │ └── 11k4eff1_new_field
+ ├── db_3
+ │ └── 0
+ ├── db_4
+ │ ├── 0
+ │ └── 1
+ ├── db_5
+ │ ├── 0
+ │ └── current
+ │ ├── 65a9ee3e_add_new_table
+ │ └── 34a4e212_add_new_field
+ ├── db_6
+ │ └── 0
+ ├── db_7
+ │ ├── 0
+ │ └── 1
+ ├── db_8
+ │ └── 0
+ └── db_n
+ └── 0
+```
+
+Each database has its own directory entry, which is further subdivided into versions. Each version entry contains two
+obligatory files: `up.sql` and `down.sql`. You may also find an optional `devel.sql` file there.
+
+Each entry that is assigned a version number is read-only. This means that the version was officially released to the
+public and cannot be modified.
+
+Folders named `current` contain migrations that are still under development. Each folder may contain many revision
+entries, where each entry corresponds to one migration (even a small one). Such an approach allows for better
+granularity. Upon a new official release, these entries will be merged and assigned a version number, making them
+read-only.
+
+### Migrations
+
+Migrations allow us to evolve the database schema over time. Each migration can be applied (`up.sql`) or reverted
+(`down.sql`). Applying and immediately reverting a migration should leave your database schema unchanged.
+
+#### up.sql
+
+This file contains all the SQL operations required to apply the migration.
+
+Example:
+
+```
+CREATE TABLE posts (
+ id SERIAL PRIMARY KEY,
+ title VARCHAR NOT NULL,
+ body TEXT NOT NULL,
+ published BOOLEAN NOT NULL DEFAULT FALSE
+);
+```
+
+#### down.sql
+
+Similarly, this file contains all the SQL operations required to revert the migration.
+
+Example:
+
+```
+DROP TABLE posts;
+```
+
+#### Optional: devel.sql
+
+Here you can put SQL operations that you would use only during product development. They will be applied
+along with `up.sql`.
+> Development features are not applied as part of official releases. They are used
+> for internal development needs only.
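+For instance, a `devel.sql` file might seed a table with test data. The statements below are purely illustrative and
+reuse the `posts` table from the earlier example:
+
+```
+INSERT INTO posts (title, body) VALUES
+    ('Test post', 'Sample body used during development'),
+    ('Another test post', 'More sample data');
+```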
+
+### Specifying database version set
+
+When generating the product image, the migration script retrieves the information it needs from the `databases.json`
+file. This file lists the databases used by the product and their specific versions. Such an approach allows for
+reusability of the same database schemas in different products.
+
+An example of such a file is listed below:
+
+```
+{
+ "product": {
+ "databases":[
+ {"name": "db_1", "version": "0"},
+ {"name": "db_2", "version": "2"},
+ {"name": "db_3", "version": "0"},
+ {"name": "db_4", "version": "4"},
+ {"name": "db_5", "version": "0"},
+ {"name": "db_6", "version": "1"},
+ {"name": "db_8", "version": "0"},
+ {"name": "db_9", "version": "1"},
+ {"name": "db_n", "version": "0"}
+ ]
+ }
+}
+```
+
+You do not need to modify this file, as all the required modifications are applied
+automatically by the migration tool.
+
+## Migration tool
+
+Database migration operations are handled by the [migration tool](../tools/db_migration.py).
+
+Currently, it supports:
+
+```
+init - initialize the migration environment
+revision - create a new migration revision
+commit - merge all the existing revisions and generate a new frozen database version
+upgrade - upgrade the database to a specific revision
+install - generate the database set and install it in the specified output directory
+revert - execute the down.sql script for the newest migration
+redo - execute the down.sql and up.sql scripts for the newest migration
+```
+
+Detailed information about each subcommand can be printed by passing `--help` as its argument, for instance:
+
+```
+python3 db_migration.py revision --help
+usage: db_migration.py revision [-h] [-e ENV] --db DB -m MESSAGE
+
+Creates a new database migration revision
+
+optional arguments:
+ -h, --help show this help message and exit
+ -e ENV, --env ENV environment location
+ --db DB database name
+ -m MESSAGE, --message MESSAGE
+ revision message
+
+```
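+
+Before using the other subcommands, the migration environment has to be initialized once with `init`. A hypothetical
+invocation (all paths below are placeholders) could look like this:
+
+```
+python3 db_migration.py init -e <build_dir> \
+    --dbset <product_db_dir>/databases.json \
+    -o <build_dir>/databases \
+    --dirs <product_db_dir> --dirs <common_db_dir>
+```
+
+Note that the product-specific directory has to be passed as the first `--dirs` element.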
+
+## Adding database migration step-by-step
+
+Below are detailed step-by-step instructions on how to add or update a database migration:
+
+1. Add the CMake build directory to the env variable: `export DB_MIGRATION_ENV=<build_dir>`. This step is optional but
+   highly recommended. Without it, you would need to pass the environment location each time you invoke the migration
+   tool.
+2. Execute `python3 db_migration.py revision --db <database_you_want_to_update> -m "Short description of the revision"`.
+   This will create a `current` folder (if it doesn't exist already) and place a revision entry under it.
+   The revision entry looks like this:
+ ```
+ current
+ └── 65a9ee3e_description
+ ├── .meta
+ ├── devel.sql
+ ├── down.sql
+ └── up.sql
+ ```
+   Its name (`65a9ee3e_description`) is created from the first 8 characters of the assigned unique ID and the
+   description string that you passed to the command. The script also generated a set of empty SQL files: `up.sql`,
+   `down.sql` and `devel.sql` that you will have to fill in. The `.meta` file contains various information about the
+   revision, such as its creation date, unique identifier and more. **Do not modify or remove it!**
+3. Add required SQL to `up.sql` and `down.sql`. Remember that `down.sql` should revert all the changes introduced
+ by `up.sql`.
+4. If you do not need development-specific migration, you can remove `devel.sql`.
+5. To apply the newly added migration, you can use either:
+ * `python3 db_migration.py upgrade --db <database>`
+ * `{product}-disk-img` build target
+6. You can also upgrade a database to a specific revision by
+   invoking `python3 db_migration.py upgrade --db <database> --revision <revision_id>`. The revision ID can be obtained
+   from the revision's meta file.
+7. Congratulations, you've managed to update the database schema.
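+
+The meta file is a small JSON document. Its exact contents may vary, but it looks roughly like this (the values below
+are made up; the first revision in a chain has `"parent": 0`):
+
+```
+{
+ "id": "65a9ee3e-9f2a-4c1d-8b3e-0a1b2c3d4e5f",
+ "date": "2023-01-10 12:34:56",
+ "message": "description",
+ "parent": 0
+}
+```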
+
+## Committing revisions
+
+When the time comes to issue a new official database release, all you have to do is invoke:
+
+`python3 db_migration.py commit`
+
+This command will do the following steps for each database specified in `databases.json`:
+
+* merge all the existing revisions
+* assign a new database version number to them
+* copy merged revisions to the newly created version directory
+* remove the `current` folder
+
+You can also commit each database manually by invoking:
+
+`python3 db_migration.py commit --db <database_name>`
+
+> These operations are irreversible.
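+
+For example, committing `db_5` from the overview above (its latest released version is `0`, so the new version
+becomes `1`) turns:
+
+```
+db_5
+├── 0
+└── current
+    ├── 65a9ee3e_add_new_table
+    └── 34a4e212_add_new_field
+```
+
+into:
+
+```
+db_5
+├── 0
+└── 1
+```
+
+and updates the corresponding `databases.json` entry to `{"name": "db_5", "version": "1"}`.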
@@ 1,17 1,50 @@
{
- "PurePhone": {
- "databases":[
- {"name": "events", "version": "0"},
- {"name": "multimedia", "version": "0"},
- {"name": "alarms", "version": "0"},
- {"name": "calllog", "version": "0"},
- {"name": "contacts", "version": "0"},
- {"name": "custom_quotes", "version": "0"},
- {"name": "notes", "version": "0"},
- {"name": "notifications", "version": "0"},
- {"name": "predefined_quotes", "version": "0"},
- {"name": "settings_v2", "version": "0"},
- {"name": "sms", "version": "0"}
- ]
- }
+ "PurePhone": {
+ "databases": [
+ {
+ "name": "events",
+ "version": "0"
+ },
+ {
+ "name": "multimedia",
+ "version": "0"
+ },
+ {
+ "name": "alarms",
+ "version": "0"
+ },
+ {
+ "name": "calllog",
+ "version": "0"
+ },
+ {
+ "name": "contacts",
+ "version": "0"
+ },
+ {
+ "name": "custom_quotes",
+ "version": "0"
+ },
+ {
+ "name": "notes",
+ "version": "0"
+ },
+ {
+ "name": "notifications",
+ "version": "0"
+ },
+ {
+ "name": "predefined_quotes",
+ "version": "0"
+ },
+ {
+ "name": "settings_v2",
+ "version": "0"
+ },
+ {
+ "name": "sms",
+ "version": "0"
+ }
+ ]
+ }
}
\ No newline at end of file
@@ 0,0 1,465 @@
+#!/usr/bin/python3
+# Copyright (c) 2017-2023, Mudita Sp. z.o.o. All rights reserved.
+# For licensing, see https://github.com/mudita/MuditaOS/LICENSE.md
+import os
+import uuid
+import sqlite3
+from argparse import ArgumentParser
+from pathlib import Path
+import sys
+import datetime
+import json
+import shutil
+import traceback
+import itertools
+
+# Constants
+up_script = "up.sql"
+down_script = "down.sql"
+devel_script = "devel.sql"
+meta_file = ".meta"
+databases_set = "databases.json"
+env_file = "dbm_env.ini"
+
+license_header = f"-- Copyright (c) 2017-{datetime.date.today().year}, Mudita Sp. z.o.o. All rights reserved.\n" \
+ "-- For licensing, see https://github.com/mudita/MuditaOS/LICENSE.md\n\n"
+
+cli = ArgumentParser()
+subparsers = cli.add_subparsers(dest="subcommand")
+
+
+def subcommand(args=(), parent=subparsers):
+    def decorator(func):
+        parser = parent.add_parser(func.__name__, description=func.__doc__)
+        for arg in args:
+            parser.add_argument(*arg[0], **arg[1])
+        parser.set_defaults(func=func)
+        # Return the function so the decorated name still refers to it
+        return func
+
+    return decorator
+
+
+def argument(*name_or_flags, **kwargs):
+ return [*name_or_flags], kwargs
+
+
+class RevisionMetadata:
+ _key_id = "id"
+ _key_date = "date"
+ _key_message = "message"
+ _key_parent = "parent"
+ file_name = ".meta"
+
+ def __init__(self, id, date, message, parent):
+ self.set = {RevisionMetadata._key_id: str(id), RevisionMetadata._key_date: date,
+ RevisionMetadata._key_message: message, RevisionMetadata._key_parent: parent}
+
+ def id(self):
+ return self.set[RevisionMetadata._key_id]
+
+ def parent(self):
+ return self.set[RevisionMetadata._key_parent]
+
+ def message(self):
+ return self.set[RevisionMetadata._key_message]
+
+ @classmethod
+ def from_file(cls, path: Path):
+ with open(path, "r") as f:
+ raw = json.load(f)
+ return cls(raw[cls._key_id], raw[cls._key_date], raw[cls._key_message], raw[cls._key_parent])
+
+    def dump_to_file(self, path: Path):
+        # Overwrite to avoid appending duplicate metadata on repeated calls
+        with path.open('w') as file:
+            file.write(json.dumps(self.set, indent=1))
+
+
+class ConstRevisionEntry:
+ def __init__(self, dir: Path):
+ self.dir = dir
+ self.metadata = RevisionMetadata.from_file(dir / RevisionMetadata.file_name)
+
+ def read_sql(self):
+ with open(self.dir / up_script) as f:
+ up = f.read()
+ with open(self.dir / down_script) as f:
+ down = f.read()
+ try:
+ with open(self.dir / devel_script) as f:
+ devel = f.read()
+ except OSError:
+ devel = None
+ return up, down, devel
+
+
+class RevisionEntry:
+ def __init__(self, base_dir: Path, message: str):
+ self.id = uuid.uuid4()
+ self.base_dir = base_dir
+ self.dir = base_dir / "{id}_{message}".format(id=str(self.id)[:8], message=message.replace(' ', '_'))
+ self.date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+ self.message = message
+ latest_revision = get_latest_revision(self.base_dir)
+ self.metadata = RevisionMetadata(self.id, self.date, self.message,
+ latest_revision.metadata.id() if latest_revision else 0)
+
+ def spawn(self):
+ Path.mkdir(self.dir, exist_ok=True, parents=True)
+ self.metadata.dump_to_file(self.dir / RevisionMetadata.file_name)
+ self._build_sql_template()
+
+ def _build_sql_template(self):
+ with (self.dir / up_script).open('w') as file:
+ file.write(self._sql_header())
+
+ with (self.dir / down_script).open('w') as file:
+ file.write(self._sql_header())
+
+ with (self.dir / devel_script).open('w') as file:
+ file.write(self._sql_header())
+
+ def _sql_header(self):
+ return f'-- Message: {self.message}\n' \
+ f'-- Revision: {self.id}\n' \
+ f'-- Create Date: {self.date}\n\n' \
+ f'-- Insert SQL here\n'
+
+
+class DatabaseSet:
+ def __init__(self, path: Path):
+ self.key_db_version = "version"
+ self.key_db_name = "name"
+ self.key_db_array = "databases"
+
+ with open(path, "r") as f:
+ self.set = json.load(f)
+
+ self.path = path
+ self.product = list(self.set.keys())[0]
+
+ def get_database_version(self, db_name: str):
+ v = next(
+ d[self.key_db_version] for d in self.set[self.product][self.key_db_array] if
+ d[self.key_db_name] == db_name)
+ return int(v)
+
+ def db_array(self):
+ return self.set[self.product][self.key_db_array]
+
+ def list_databases_by_name(self):
+ return set([database["name"] for database in self.db_array()])
+
+ def modify_database_version(self, db_name: str, version: int):
+ entry = next(d for d in self.set[self.product][self.key_db_array] if d[self.key_db_name] == db_name)
+ entry[self.key_db_version] = str(version)
+ with open(self.path, 'w') as file:
+ file.write(json.dumps(self.set, indent=1))
+
+
+class Migration:
+ env_var = "DB_MIGRATION_ENV"
+ _rev_base_dir = "current"
+
+ def _get_env(self, path: Path):
+ """Tries to fetch environment settings from the given file"""
+ with open(path / env_file) as f:
+ data = json.load(f)
+
+ data["output_dir"] = Path(data["output_dir"])
+ data["dirs"][:] = [Path(e) for e in data["dirs"]]
+ data["db_set_dir"] = Path(data["db_set_dir"])
+ return data
+
+ def _get_db_set(self):
+ return DatabaseSet(self._env["db_set_dir"])
+
+ def _get_database_path(self, db_name):
+ return next(d / db_name for d in self._env["dirs"] if (Path(d) / db_name).exists())
+
+ def _invoke_sql(self, db_name, script_name):
+ base_dir = self._get_database_path(db_name) / Migration._rev_base_dir
+ if not base_dir.exists():
+ print("Nothing to invoke")
+ return
+
+ rev = get_latest_revision(base_dir)
+ execute_db_script(self._env["output_dir"] / f"{db_name}.db", rev.dir / script_name)
+
+    def __init__(self, env_path: Path):
+        # Fall back to the DB_MIGRATION_ENV variable when no --env was given
+        env_path = env_path or Path(os.environ[Migration.env_var])
+        self._env = self._get_env(env_path)
+
+ self.db_names = [os.listdir(d) for d in self._env["dirs"]]
+ self.db_names = list(itertools.chain(*self.db_names))
+
+ def upgrade(self, db_name, rev, devel):
+ print(f"Upgrading '{db_name}', devel features: {devel}")
+
+ db_path = self._get_database_path(db_name)
+
+ Path.mkdir(self._env["output_dir"], exist_ok=True, parents=True)
+ # Remove old database, if exists
+ Path.unlink(self._env["output_dir"] / f"{db_name}.db", missing_ok=True)
+
+ # First, migrate using already committed db version from database set file
+ version = self._get_db_set().get_database_version(db_name)
+ print(f"-> Upgrading to committed version: {version}")
+ migrate_database_up(db_name, db_path, self._env["output_dir"], version, devel)
+
+ # Check if 'current' directory exists and apply current revision list
+ current_path = db_path / Migration._rev_base_dir
+ if not current_path.exists():
+ return
+
+ revisions = build_revision_entries(current_path)
+
+ if rev is None:
+ print(f"-> Upgrading to the newest available revision: {revisions[-1].metadata.id()}")
+ revisions_range = revisions[:]
+ else:
+ # Upgrade up to the specified revision
+ revisions_range = build_revision_entries_up_to(revisions, rev)
+ if not revisions_range:
+ print(f"-> revision: {rev} does not exist")
+ return
+
+ print(f"-> Upgrading to the revision: {rev}")
+
+ for revision in revisions_range:
+ meta = revision.metadata
+ print(f" -> Running upgrade from {meta.parent()} to {meta.id()}")
+ execute_db_script(self._env["output_dir"] / f"{db_name}.db", revision.dir / up_script)
+
+ if devel and os.path.exists(revision.dir / devel_script):
+ execute_db_script(self._env["output_dir"] / f"{db_name}.db", revision.dir / devel_script)
+
+ def install(self, devel):
+ shutil.rmtree(self._env["output_dir"], ignore_errors=True)
+ Path.mkdir(self._env["output_dir"], exist_ok=True, parents=True)
+
+ databases_to_migrate = self._get_db_set().list_databases_by_name().intersection(self.db_names)
+
+ print(f"Database set to be upgraded and installed: {databases_to_migrate}")
+ for db_name in databases_to_migrate:
+ self.upgrade(db_name, None, devel)
+
+ # Populate output dir with migration scripts, skip 'devel.sql' scripts
+ for d in self._env["dirs"]:
+ shutil.copytree(d, self._env["output_dir"] / "migration", dirs_exist_ok=True,
+ ignore=shutil.ignore_patterns(devel_script))
+
+ def commit(self, db_name):
+ db_path = self._get_database_path(db_name)
+ current_path = db_path / Migration._rev_base_dir
+ upgrade_version = self._get_db_set().get_database_version(db_name) + 1
+
+ print(f"Committing database '{db_name}':")
+
+ if not current_path.exists():
+ print("->Nothing to commit")
+ return
+
+        # Prepare new version directory structure
+        version_path = db_path / str(upgrade_version)
+        Path.mkdir(version_path, exist_ok=True, parents=True)
+
+        merge_sql_from_dir(current_path, version_path)
+
+ self._get_db_set().modify_database_version(db_name, upgrade_version)
+
+ shutil.rmtree(current_path)
+
+ print(f"->New version generated from commit: {upgrade_version}")
+
+ def commit_all(self):
+ for db_name in self._get_db_set().list_databases_by_name():
+ self.commit(db_name)
+
+ def revision(self, db_name, message):
+ base_dir = self._get_database_path(db_name) / Migration._rev_base_dir
+
+ Path.mkdir(base_dir, exist_ok=True, parents=True)
+ entry = RevisionEntry(base_dir, message)
+ entry.spawn()
+ print(f"Added new revision: {entry.metadata.id()}")
+
+ def revert(self, db_name):
+ self._invoke_sql(db_name, down_script)
+
+ def redo(self, db_name):
+ self._invoke_sql(db_name, down_script)
+ self._invoke_sql(db_name, up_script)
+
+
+def build_revision_entries(base: Path):
+ """ Builds the list of ConstRevisionEntry entries where each child is placed after its parent.
+ Revision_1(id=1,parent=0) -> Revision_2(id=2,parent=1) -> Revision_n(id=n,parent=2)
+ """
+
+ metas = []
+
+ for entry in base.iterdir():
+ metas.append(ConstRevisionEntry(entry))
+
+ chain = []
+ parent_index = 0
+ for _ in metas:
+ try:
+ entry = next(d for d in metas if d.metadata.parent() == parent_index)
+ parent_index = entry.metadata.id()
+ chain.append(entry)
+ except StopIteration:
+ break
+ return chain
+
+
+def build_revision_entries_up_to(revisions, rev):
+ """ Try to build the list of ConstRevisionEntry entries from the already existing list of revisions up to the
+ specified revision. For instance, Revision_1(id=1,parent=0) -> Revision_2(id=2,parent=1) -> Revision_n(id=rev,
+ parent=2)
+ """
+    # Default must be None: a list default would be truthy, hiding missing revisions
+    if next((r for r in revisions if r.metadata.id() == rev), None):
+        revisions_range = []
+        for r in revisions:
+            revisions_range.append(r)
+            if r.metadata.id() == rev:
+                return revisions_range
+    else:
+        return None
+
+
+def get_latest_revision(base: Path):
+ """Obtains the latest ConstRevisionEntry """
+ chain = build_revision_entries(base)
+ return None if len(chain) == 0 else chain[-1]
+
+
+def merge_sql_from_dir(directory: Path, out: Path):
+ revisions = build_revision_entries(directory)
+
+ # Merge up/down.sql
+ with open(out / up_script, 'w') as up_file, open(out / down_script, 'w') as down_file, open(out / devel_script,
+ 'w') as devel_file:
+ up_file.write(license_header)
+ down_file.write(license_header)
+ devel_file.write(license_header)
+ for rev in revisions:
+ print(f"->Merging revision: {rev.metadata.id()}")
+ sql_up, _, sql_devel = rev.read_sql()
+ up_file.write(sql_up + '\n')
+
+ if sql_devel:
+ devel_file.write(sql_devel + '\n')
+
+ # Down scripts need to be merged in reversed order
+ for rev in reversed(revisions):
+ _, sql_down, _ = rev.read_sql()
+ down_file.write(sql_down + '\n')
+
+
+def execute_db_script(db_path: Path, script: Path, version: int = None):
+    connection = sqlite3.connect(db_path)
+    with open(script) as ms:
+        connection.executescript(ms.read())
+        connection.commit()
+    if version is not None:
+        # 'version' may legitimately be 0, so compare against None explicitly
+        connection.execute(f"PRAGMA user_version = {version};")
+        connection.commit()
+    connection.close()
+
+
+def migrate_database_up(database: str, migration_path: Path, dst_directory: Path, dst_version: int, devel: bool):
+ db_name_full = f"{database}.db"
+ dst_db_path = dst_directory / db_name_full
+ Path(dst_db_path).unlink(missing_ok=True)
+
+ for i in range(dst_version + 1):
+ migration_script = migration_path / str(i) / up_script
+ devel_script_path = migration_path / str(i) / devel_script
+ execute_db_script(dst_db_path, migration_script, i)
+ if devel and os.path.exists(devel_script_path):
+ execute_db_script(dst_db_path, devel_script_path, i)
+
+
+@subcommand([argument("-e", "--env", help="where to store environment configuration", required=True, type=Path),
+ argument("--dbset", help="location of the file describing database set", required=True, type=Path),
+ argument("-o", "--out", help="where to store generated databases", required=True, type=Path),
+ argument("--dirs",
+ help="list of migration base directories. It's important to pass product-specific directory as "
+ "a first element on the list",
+ action='append',
+ nargs='*',
+ required=True,
+ type=Path)])
+def init(args):
+ """Initializes migration environment"""
+ env = {"db_set_dir": args.dbset.as_posix(), "output_dir": args.out.as_posix(),
+ "dirs": [a[0].as_posix() for a in args.dirs]}
+ with open(args.env / env_file, 'w') as f:
+ f.write(json.dumps(env, indent=1))
+
+
+@subcommand([argument("-e", "--env", help="environment location", type=Path),
+ argument("--db", help="database name", required=True, type=str),
+ argument("-m", "--message", help="revision message", required=True, type=str)])
+def revision(args):
+ """Creates a new database migration revision"""
+ Migration(args.env).revision(args.db, args.message)
+
+
+@subcommand([argument("-e", "--env", help="environment location", type=Path),
+ argument("--db", help="database name", type=str)])
+def commit(args):
+ """Commits current set of SQL statements and updates database version number"""
+ if args.db:
+ Migration(args.env).commit(args.db)
+ else:
+ Migration(args.env).commit_all()
+
+
+@subcommand(
+ [argument("-e", "--env", help="environment location", type=Path),
+     argument("-d", "--devel", help="with development schema", action="store_true")])
+def install(args):
+ """ Generates database set and then installs it in the specific output directory. It also populates output
+ directory with corresponding migration scripts"""
+ Migration(args.env).install(args.devel)
+
+
+@subcommand(
+ [argument("-e", "--env", help="environment location", type=Path),
+ argument("--db", help="database name", type=str, required=True),
+ argument("-r", "--revision", help="target revision", type=str),
+     argument("-d", "--devel", help="with development schema", action="store_true")])
+def upgrade(args):
+ """ Upgrades database to the specific revision(or the newest one if --revision parameter omitted)"""
+ Migration(args.env).upgrade(args.db, args.revision, args.devel)
+
+
+@subcommand([argument("-e", "--env", help="environment location", type=Path),
+ argument("--db", help="database name", type=str, required=True)])
+def revert(args):
+ """ Runs the (down.sql) for the specified database for the most recent migration"""
+ Migration(args.env).revert(args.db)
+
+
+@subcommand([argument("-e", "--env", help="environment location", type=Path),
+ argument("--db", help="database name", type=str, required=True)])
+def redo(args):
+ """ Runs the (down.sql) and then the (up.sql) for the most recent migration"""
+ Migration(args.env).redo(args.db)
+
+
+def main() -> int:
+    args = cli.parse_args()
+    if args.subcommand is None:
+        cli.print_help()
+        return 1
+    try:
+        args.func(args)
+    except Exception:
+        print(traceback.format_exc())
+        return 1
+    return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
@@ 1,127 0,0 @@
-#!/usr/bin/python3
-# Copyright (c) 2017-2022, Mudita Sp. z.o.o. All rights reserved.
-# For licensing, see https://github.com/mudita/MuditaOS/LICENSE.md
-
-# import required module
-import os
-import sqlite3
-import argparse
-import logging
-import sys
-import json
-import shutil
-from pathlib import Path
-
-log = logging.getLogger(__name__)
-logging.basicConfig(format='%(asctime)s [%(levelname)s]: %(message)s', level=logging.INFO)
-
-databases_json_filename = "databases.json"
-scripts_folder_name = "scripts"
-migration_folder_name = "migration"
-
-
-# this helper script creates DBs from SQL schema files
-def migrate_database_up(database: str, migration_path: os.path, dst_directory: os.path, dst_version: int, devel: bool):
- connection = None
-
- db_name_full = f"{database}.db"
- dst_db_path = os.path.join(dst_directory, db_name_full)
- Path(dst_db_path).unlink(missing_ok=True)
-
- ret = 0
- try:
- connection = sqlite3.connect(dst_db_path)
- log.info(f"\nPerforming up-migration of {database} to {dst_version}")
- for i in range(dst_version + 1):
- migration_script = os.path.join(migration_path, *[database, str(i), "up.sql"])
- devel_script = os.path.join(migration_path, *[database, str(i), "devel.sql"])
- with open(migration_script) as ms:
- connection.executescript(ms.read())
- connection.commit()
- if devel and os.path.exists(devel_script):
- with open(devel_script) as ds:
- connection.executescript(ds.read())
- connection.commit()
- connection.execute(f"PRAGMA user_version = {i};")
- connection.commit()
-
- except OSError as e:
- log.error(f"System error: {e}")
- ret = 1
- except sqlite3.Error as e:
- log.error(f"[SQLite] {database} database error: {e}")
- ret = 1
- finally:
- if connection:
- connection.close()
-
- return ret
-
-
-def migrate_database_wrapper(migration_path: os.path, json: json, dst_directory: os.path, devel: bool) -> int:
- product = list(json.keys())[0]
- databases_json = json[product]["databases"]
- databases = os.listdir(migration_path)
- databases_to_migrate = set([database["name"] for database in databases_json]).intersection(databases)
-
- for database in databases_json:
- name = database["name"]
- if name in databases_to_migrate:
- migrate_database_up(name, migration_path, dst_directory, int(database["version"]), devel)
-
- return 0
-
-
-def main() -> int:
- parser = argparse.ArgumentParser(description='Create databases from schema scripts')
- parser.add_argument('--common_path',
- metavar='common_path',
- type=str,
- help='path to common databases scripts',
- required=True)
-
- parser.add_argument('--product_path',
- metavar='product_path',
- type=str,
- help='path to product-specific databases scripts',
- required=True)
-
- parser.add_argument('--output_path',
- metavar='db_path',
- type=str,
- help='destination path for databases',
- required=True)
-
- parser.add_argument('--development',
- metavar='devel',
- type=bool,
- help='with development schema scripts',
- default=False)
-
- args = parser.parse_args()
-
- ret = 0
-
- json_path = os.path.join(args.product_path, databases_json_filename)
- json_data = None
-
- if os.path.exists(json_path):
- with open(json_path, "r") as json_file:
- json_data = json.load(json_file)
- else:
- log.error("Json file does not exists!")
- return 1
-
- if not os.path.exists(args.output_path):
- os.makedirs(args.output_path, exist_ok=True)
-
- for database_path in [args.common_path, args.product_path]:
- migration_path = os.path.join(database_path, migration_folder_name)
- ret |= migrate_database_wrapper(migration_path, json_data, args.output_path, args.development)
- shutil.copytree(migration_path, os.path.join(args.output_path, migration_folder_name), dirs_exist_ok=True)
-
- return ret
-
-
-if __name__ == "__main__":
- sys.exit(main())