Writing stored procedures when using dynamic schema names in SQL Server

余生颓废 submitted on 2021-02-19 04:14:03

Question


The application I am currently working with uses different schema names for its tables; for example, Table1 can exist in multiple schemas, say A.Table1 and B.Table1. All my stored procedures are stored under dbo. I'm writing the stored procedure below using dynamic SQL. I'm currently using SQL Server 2008 R2, and it will soon be migrated to SQL Server 2012.

create procedure dbo.usp_GetDataFromTable1
@schemaname varchar(100),
@userid bigint
as
begin
    declare @sql nvarchar(4000)
    set @sql='select a.EmailID from '+@schemaname+'.Table1 a where a.ID=@user_id';
    exec sp_executesql @sql, N'@user_id bigint', @user_id=@userid
end

Now my questions are:

1. Does this type of approach affect the performance of my stored procedure?
2. If performance is affected, how should I write procedures for this kind of scenario?


Answer 1:


The best way around this would be a redesign, if at all possible.

You can even implement this retrospectively by adding a new column to replace the schema, for example Profile, and then merging the tables from each schema into a single table in one schema (e.g. dbo).

Then your procedure would appear as follows:

create procedure dbo.usp_GetDataFromTable1
@profile int,
@userid bigint
as
begin
    select a.EmailID from dbo.Table1 a 
    where a.ID = @userid
    and a.Profile = @profile
end

I have used an int for the profile column, but if you use a varchar you could even keep your schema name for the profile value, if that helps to make things clearer.
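
For illustration, the retrospective merge could look something like the sketch below. The column list and the mapping of schemas A and B to profile values 1 and 2 are assumptions for the example, not something given in the question:

insert into dbo.Table1 (Profile, ID, EmailID)   -- add the remaining columns as needed
select 1, ID, EmailID from A.Table1             -- profile 1 stands in for schema A
union all
select 2, ID, EmailID from B.Table1;            -- profile 2 stands in for schema B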




Answer 2:


I would look at a provisioning approach, where you dynamically create the tables and stored procedures as part of some up-front process. I'm not 100% sure of your scenario, but perhaps this could be when you add a new user. Then, you can call these SPs by convention in the application.

For example, new user creation calls an SP which creates a c.Table table and a c.GetDetails SP.

Then in the app you can call c.GetDetails based on "c" being a property of the user definition.

This gets you around any security concerns from using dynamic SQL. It's still dynamic, but is built once up front.
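
As a rough sketch of what such an up-front provisioning step could look like (the procedure name usp_ProvisionTenant, the Table1 columns, and the GetDetails body are all illustrative assumptions):

create procedure dbo.usp_ProvisionTenant
    @tenant sysname
as
begin
    declare @sql nvarchar(max);

    -- create the tenant schema
    set @sql = N'create schema ' + quotename(@tenant) + N' authorization dbo;';
    exec sp_executesql @sql;

    -- create the tenant's copy of the table
    set @sql = N'create table ' + quotename(@tenant) +
               N'.Table1 (ID bigint primary key, EmailID nvarchar(255));';
    exec sp_executesql @sql;

    -- create the tenant's copy of the stored procedure
    set @sql = N'create procedure ' + quotename(@tenant) + N'.GetDetails @userid bigint as
                 select EmailID from ' + quotename(@tenant) + N'.Table1 where ID = @userid;';
    exec sp_executesql @sql;
end

The application would then call <tenant>.GetDetails by convention, as described above.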




Answer 3:


A dynamic schema with the same table structure is quite unusual, but you can still obtain what you want using something like this:

declare @sql nvarchar(4000)
declare @schemaName VARCHAR(20) = 'schema'
declare @tableName VARCHAR(20) = 'Table'
-- this will fail, as the whole string will be 'quoted' within [..]
-- declare @tableName VARCHAR(20) = 'Category; DELETE FROM TABLE x;'

set @sql='select * from ' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName)
PRINT @sql

-- @user_id is not used here, but it can if the query needs it
exec sp_executesql @sql, N'@user_id bigint', @user_id=0

So QUOTENAME should keep you on the safe side regarding SQL injection.
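
Applied to the procedure from the question, that could look like the sketch below; the existence check against sys.schemas is an extra precaution I'm assuming, not something QUOTENAME gives you:

create procedure dbo.usp_GetDataFromTable1
    @schemaname varchar(100),
    @userid bigint
as
begin
    -- optional sanity check: only accept schemas that actually exist
    if not exists (select 1 from sys.schemas where name = @schemaname)
    begin
        raiserror('Unknown schema.', 16, 1);
        return;
    end

    declare @sql nvarchar(4000);
    set @sql = N'select a.EmailID from ' + quotename(@schemaname) +
               N'.Table1 a where a.ID = @user_id';
    exec sp_executesql @sql, N'@user_id bigint', @user_id = @userid;
end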

1. Performance - dynamic SQL cannot benefit from some performance improvements (procedure-associated cached plans and statistics, or something similar), so there is a performance risk.

However, for simple queries that run on a rather small amount of data (tens of millions of rows at most) and on data that does not change heavily (inserts and deletes), I don't think you will have noticeable problems.

2. Alternative - bukko has suggested a solution: since all tables have the same structure, they can be merged. If the merged table becomes huge, good indexing and partitioning should be able to reduce query execution times.
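
As a sketch of the partitioning idea, assuming the merged dbo.Table1 carries the Profile column from the first answer and has no clustered index yet (the object names here are illustrative):

-- one partition per profile value
create partition function pfProfile (int) as range left for values (1, 2, 3);
create partition scheme psProfile as partition pfProfile all to ([PRIMARY]);

-- cluster the merged table on (Profile, ID) and place it on the partition scheme
create clustered index IX_Table1_Profile_ID
    on dbo.Table1 (Profile, ID)
    on psProfile (Profile);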




Answer 4:


Dynamic SQL usually affects both performance and security, most of the time for the worse. However, since you can't parameterize identifiers, this is probably the only way for you unless you are willing to duplicate your stored procedures for each schema:

create procedure dbo.usp_GetDataFromTable1
@schemaname varchar(100),
@userid bigint
as
begin
    if @schemaname = 'a'
    begin
        select EmailID from a.Table1 where ID = @userid
    end
    else if @schemaname = 'b'
    begin
        select EmailID from b.Table1 where ID = @userid
    end

end



Answer 5:


The only reason I can think of for doing this is satisfying multiple tenants. You're close, but the approach you are taking is wrong.

There are three approaches to multi-tenancy that I'm aware of: database per tenant, single database with a schema per tenant, or single database with a single schema (aka tenant by row).

Two of these have already been mentioned by other users here. The one that hasn't really been detailed is schema per tenant, which is what it looks like you fall under. For this approach you need to change the way you see the database: the database at this point is just a container for schemas. Each schema can have its own design, stored procs, triggers, queues, functions, etc. The main goal is data isolation; you don't want tenant A seeing tenant B's stuff. The advantage of the schema-per-tenant approach is that you can be more flexible with tenant-specific database changes. It also allows you to scale more easily than a database-per-tenant approach.

Answer: Instead of writing dynamic SQL that takes the schema into account while running as the dbo user, you should create the same stored proc in each schema (e.g. create procedure schema_name.stored_proc_name). In order to run the stored proc for a given schema, you'll need to impersonate a user that is tied to the schema in question. It would look something like this:

execute as user = 'tenantA'
exec sp_testing
revert --revert will take us back to the original user, most likely DBO in your case.
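
The one-time setup this assumes would look roughly like the following; the user name tenantA and the grantee app_user are illustrative, and the tenantA schema is assumed to exist already:

-- a user tied to the tenant, with the tenant schema as its default,
-- so an unqualified name like sp_testing resolves to tenantA's objects first
create user tenantA without login with default_schema = tenantA;

-- allow the application's database user to impersonate it
grant impersonate on user::tenantA to app_user;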

Collating data across all tenants is a little harder. The only solution I'm aware of is to run as the dbo user and "union all" the results from each schema separately, which is kind of tedious if you have a ton of schemas.
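
For what it's worth, that cross-schema "union all" can at least be generated rather than hand-written. A sketch, assuming every tenant schema follows a naming convention such as 'tenant%' and contains the same Table1:

declare @sql nvarchar(max) = N'';

-- build one SELECT per tenant schema, each prefixed with ' union all '
select @sql = @sql + N' union all select ' + quotename(s.name, '''') +
              N' as TenantSchema, EmailID from ' + quotename(s.name) + N'.Table1'
from sys.schemas s
where s.name like 'tenant%';   -- illustrative naming convention

-- strip the leading ' union all ' (11 characters) before executing
set @sql = stuff(@sql, 1, 11, N'');
exec sp_executesql @sql;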




Answer 6:


There is a workaround for this if you know which schemas you are going to be using. You stated here that the schema name is created on signup; we use this approach on login. I have a view which I add unions to, or remove unions from, on session startup/dispose. Example below.

CREATE VIEW [engine].[vw_Preferences]
AS
SELECT TOP (0) CAST (NULL AS NVARCHAR (255)) AS SessionID,
               CAST (NULL AS UNIQUEIDENTIFIER) AS [PreferenceGUID],
               CAST (NULL AS NVARCHAR (MAX)) AS [Value]
UNION ALL SELECT 'ZZZ_7756404F411B46138371B45FB3EA6ADB', * FROM ZZZ_7756404F411B46138371B45FB3EA6ADB.Preferences
UNION ALL SELECT 'ZZZ_CE67D221C4634DC39664975494DB53B2', * FROM ZZZ_CE67D221C4634DC39664975494DB53B2.Preferences
UNION ALL SELECT 'ZZZ_5D6FB09228D941AC9ECD6C7AC47F6779', * FROM ZZZ_5D6FB09228D941AC9ECD6C7AC47F6779.Preferences
UNION ALL SELECT 'ZZZ_5F76B619894243EB919B87A1E4408D0C', * FROM ZZZ_5F76B619894243EB919B87A1E4408D0C.Preferences
UNION ALL SELECT 'ZZZ_A7C5ED1CFBC843E9AD72281702FCC2B4', * FROM ZZZ_A7C5ED1CFBC843E9AD72281702FCC2B4.Preferences

The first SELECT TOP (0) row is a fallback, so I always have a default definition and a static table definition. You can select from the view and filter by a session ID with:

SELECT  PreferenceGUID, Value
  FROM  engine.vw_Preferences
 WHERE  SessionID = 'ZZZ_5D6FB09228D941AC9ECD6C7AC47F6779';

The interesting part here, though, is how the execution plan is generated when you have static values inside a view: the unions that would not produce results are not evaluated, leaving a basic execution plan without any joins or unions...

You can test this, and it is just as efficient as reading directly from the table (to within a margin of error so minor nobody would care). It is even possible to replace the write-back processes by using INSTEAD OF triggers and then building dynamic SQL in the background. The dynamic SQL is less efficient on writes, but it means you can update any table via the view, which is usually only possible with a single-table view.
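
A minimal sketch of that INSTEAD OF trigger idea, using the view above. The trigger name, the assumption that each Preferences table lives in the schema named by SessionID, and the single-row handling are all illustrative:

CREATE TRIGGER [engine].[trg_vw_Preferences_Update]
ON [engine].[vw_Preferences]
INSTEAD OF UPDATE
AS
BEGIN
    -- assumes a single-row update; a production version would loop over inserted
    DECLARE @schema sysname, @guid UNIQUEIDENTIFIER, @value NVARCHAR (MAX);

    SELECT TOP (1) @schema = SessionID, @guid = PreferenceGUID, @value = [Value]
    FROM inserted;

    -- route the write to the tenant schema named in SessionID
    DECLARE @sql NVARCHAR (MAX) =
        N'UPDATE ' + QUOTENAME(@schema) + N'.Preferences
          SET [Value] = @value
          WHERE PreferenceGUID = @guid;';

    EXEC sp_executesql @sql,
         N'@value NVARCHAR(MAX), @guid UNIQUEIDENTIFIER',
         @value = @value, @guid = @guid;
END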



Source: https://stackoverflow.com/questions/35505282/writing-stored-procedures-when-using-dynamic-schema-names-in-sql-server
