Alembic support for multiple Postgres schemas

我寻月下人不归 2020-12-15 06:25

How can I use Alembic's --autogenerate to migrate multiple Postgres schemas that are not hard-coded in the SQLAlchemy model? (mirror question of SQLA

3 Answers
  •  被撕碎了的回忆
    2020-12-15 07:18

    From issue 409: applying upgrade/downgrade operations to tenant-specific schemas is most easily done using translated schema names, which is also how the main application itself would normally handle multi-tenancy.

    Go into env.py:

    def run_migrations_online():

        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

        with connectable.connect() as connection:
            for tenant_schema_name in all_my_tenant_names:
                # translate the default (None) schema into this tenant's schema
                conn = connection.execution_options(
                    schema_translate_map={None: tenant_schema_name})

                logger.info("Migrating tenant schema %s" % tenant_schema_name)
                context.configure(
                    connection=conn,
                    target_metadata=target_metadata
                )

                # to do each tenant in its own transaction;
                # move this up to do all tenants in one giant transaction
                with context.begin_transaction():
                    context.run_migrations()
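
    The same schema_translate_map is what the main application itself would normally use for multi-tenancy, pointing shared, schema-agnostic Table metadata at a particular tenant at runtime. A minimal sketch, assuming a hypothetical database URL and tenant lookup (neither is part of the original answer):

    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@localhost/mydb")

    def connection_for_tenant(tenant_schema_name):
        # any Table defined with schema=None is looked up in this tenant's schema
        return engine.connect().execution_options(
            schema_translate_map={None: tenant_schema_name})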
    

    The above will translate the "None" schema name into the given tenant name. If the application shares tenant-based schemas with a default schema that holds global tables, then you'd use some token like "tenant_schema" as the symbol:

    for tenant_schema_name in all_my_tenant_names:
        conn = connection.execution_options(
            schema_translate_map={"tenant_schema": tenant_schema_name})
    

    and in migration files refer to "tenant_schema" where the actual tenant-specific schema name goes:

    def upgrade():
        op.alter_column("some_table", "some_column", ..., schema="tenant_schema")
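
    For reference, a complete (hypothetical) migration using the token might look like the following; the column being added and its type are made up for illustration:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column(
            "some_table",
            sa.Column("new_column", sa.String(50), nullable=True),
            schema="tenant_schema",
        )

    def downgrade():
        op.drop_column("some_table", "new_column", schema="tenant_schema")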
    

    For the "autogenerate" case, the solution @nick-retallack provides has some more of the pieces you would use on that end, namely the use of include_schemas so that autogenerate looks only at a "specimen" schema that represents the latest version of the tenant-specific schema.

    In order to set up env.py to use the right system for the right command, the behavior can be controlled using user-defined options via context.get_x_argument().
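
    For example, env.py could branch on an -x option so that "alembic -x tenant=<name> upgrade head" migrates a single tenant while a plain "alembic revision --autogenerate" compares against the specimen schema. A sketch under those assumptions (the option name "tenant" is just an illustration):

    def run_migrations_online():
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

        # values passed on the command line as:  alembic -x tenant=<name> ...
        cmd_kwargs = context.get_x_argument(as_dictionary=True)

        with connectable.connect() as connection:
            if "tenant" in cmd_kwargs:
                # upgrade/downgrade a single tenant schema
                conn = connection.execution_options(
                    schema_translate_map={"tenant_schema": cmd_kwargs["tenant"]})
                context.configure(connection=conn, target_metadata=target_metadata)
            else:
                # autogenerate against the specimen schema
                context.configure(
                    connection=connection,
                    target_metadata=target_metadata,
                    include_schemas=True)

            with context.begin_transaction():
                context.run_migrations()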
