Microsoft Access Data Types
The following table shows the Microsoft Access data types, data types used to create tables, and ODBC SQL data types.
[1] Access 4.0 applications only. Maximum length of 4000 bytes. Behavior similar to LONGBINARY.
[2] ANSI applications only.
[3] Unicode and Access 4.0 applications only.

Note: SQLGetTypeInfo returns ODBC data types. It will not return all Microsoft Access data types if more than one Microsoft Access type is mapped to the same ODBC SQL data type. All conversions in Appendix D of the ODBC Programmer's Reference are supported for the SQL data types listed in the previous table.

The following table shows limitations on Microsoft Access data types.
More limitations on data types can be found in Data Type Limitations.

Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
APPLIES TO: Azure Data Factory, Azure Synapse Analytics

This article outlines how to use the Copy activity in Azure Data Factory and Azure Synapse pipelines to copy data from and to a SQL Server database, and how to use Data Flow to transform data in a SQL Server database. To learn more, read the introductory article for Azure Data Factory or Azure Synapse Analytics.

Supported capabilities

This SQL Server connector is supported for the following capabilities:
① Azure integration runtime ② Self-hosted integration runtime

For a list of data stores that are supported as sources or sinks by the copy activity, see the Supported data stores table. Specifically, this SQL Server connector supports:
SQL Server Express LocalDB is not supported.

Prerequisites

If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a self-hosted integration runtime to connect to it. If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If access is restricted to IPs that are approved in the firewall rules, you can add Azure Integration Runtime IPs to the allow list. You can also use the managed virtual network integration runtime feature in Azure Data Factory to access the on-premises network without installing and configuring a self-hosted integration runtime. For more information about the network security mechanisms and options supported by Data Factory, see Data access strategies.

Get started

To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:
Create a SQL Server linked service using UI

Use the following steps to create a SQL Server linked service in the Azure portal UI.
Connector configuration details

The following sections provide details about properties that are used to define Data Factory and Synapse pipeline entities specific to the SQL Server database connector.

Linked service properties

This SQL Server connector supports the following authentication types. See the corresponding sections for details.
Tip: If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is XXX and has been reached," add Pooling=false to your connection string and try again.

SQL authentication

To use SQL authentication, the following properties are supported:
Example: Use SQL authentication
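As a rough sketch, a SQL Server linked service that uses SQL authentication might look like the following. The server, database, and credential values are placeholders, and the connectVia reference assumes a self-hosted integration runtime.

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<servername>\\<instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;Password=<password>;"
        },
        "connectVia": {
            "referenceName": "<name of self-hosted integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```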
Example: Use SQL authentication with a password in Azure Key Vault
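A sketch of the same linked service with the password kept in Azure Key Vault instead of the connection string; the Key Vault linked service name and secret name are placeholders.

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<servername>\\<instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secretName>"
            }
        },
        "connectVia": {
            "referenceName": "<name of self-hosted integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```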
Example: Use Always Encrypted
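As a sketch only: Always Encrypted is typically configured on the linked service by adding an alwaysEncryptedSettings block that tells the service how to authenticate against the Azure Key Vault holding the column master key. The property names below are assumptions for illustration, and the credential values are placeholders.

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<servername>\\<instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;Password=<password>;",
            "alwaysEncryptedSettings": {
                "alwaysEncryptedAkvAuthType": "ServicePrincipal",
                "servicePrincipalId": "<service principal id>",
                "servicePrincipalKey": {
                    "type": "SecureString",
                    "value": "<service principal key>"
                }
            }
        },
        "connectVia": {
            "referenceName": "<name of self-hosted integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```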
Windows authentication

To use Windows authentication, the following properties are supported:
Note: Windows authentication is not supported in data flow.

Example: Use Windows authentication
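A sketch of a linked service that uses Windows authentication; the domain account is supplied through the userName and password properties, and all values are placeholders.

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<servername>\\<instance>;Initial Catalog=<databasename>;Integrated Security=True;",
            "userName": "<domain\\username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of self-hosted integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```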
Example: Use Windows authentication with a password in Azure Key Vault
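The same Windows authentication setup can pull the password from Azure Key Vault; only the password element changes relative to the previous sketch, roughly as follows (store and secret names are placeholders).

```json
"password": {
    "type": "AzureKeyVaultSecret",
    "store": {
        "referenceName": "<Azure Key Vault linked service name>",
        "type": "LinkedServiceReference"
    },
    "secretName": "<secretName>"
}
```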
Dataset properties

For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by the SQL Server dataset. To copy data from and to a SQL Server database, the following properties are supported:
Example
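A sketch of a SQL Server dataset that points at a single table; the schema and table names are placeholders, and the linked service reference assumes one of the linked services defined earlier.

```json
{
    "name": "SQLServerDataset",
    "properties": {
        "type": "SqlServerTable",
        "linkedServiceName": {
            "referenceName": "<SQL Server linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [],
        "typeProperties": {
            "schema": "<schema_name>",
            "table": "<table_name>"
        }
    }
}
```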
Copy activity properties

For a full list of sections and properties available for use to define activities, see the Pipelines article. This section provides a list of properties supported by the SQL Server source and sink.

SQL Server as a source

To copy data from SQL Server, set the source type in the copy activity to SqlSource. The following properties are supported in the copy activity source section:
Note the following points:
Example: Use SQL query
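A sketch of a copy activity that reads from SQL Server with an inline query; the activity name, dataset references, query, and sink type are placeholders.

```json
{
    "name": "CopyFromSQLServer",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "<SQL Server input dataset name>",
            "type": "DatasetReference"
        }
    ],
    "outputs": [
        {
            "referenceName": "<output dataset name>",
            "type": "DatasetReference"
        }
    ],
    "typeProperties": {
        "source": {
            "type": "SqlSource",
            "sqlReaderQuery": "SELECT * FROM MyTable"
        },
        "sink": {
            "type": "<sink type>"
        }
    }
}
```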
Example: Use a stored procedure
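A sketch of the source section when reading through a stored procedure; the procedure name and parameter values are illustrative and correspond to the definition shown next.

```json
"source": {
    "type": "SqlSource",
    "sqlReaderStoredProcedureName": "CopyTestSrcStoredProcedureWithParameters",
    "storedProcedureParameters": {
        "stringData": { "value": "str3" },
        "identifier": { "value": "123", "type": "Int" }
    }
}
```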
The stored procedure definition
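A possible definition for the stored procedure referenced above; the table and column names are illustrative.

```sql
CREATE PROCEDURE CopyTestSrcStoredProcedureWithParameters
(
    @stringData varchar(20),
    @identifier int
)
AS
SET NOCOUNT ON;
BEGIN
    -- Return the rows that the copy activity should read
    SELECT *
    FROM dbo.UnitTestSrcTable
    WHERE dbo.UnitTestSrcTable.stringData != @stringData
      AND dbo.UnitTestSrcTable.identifier != @identifier
END
GO
```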
SQL Server as a sink

To copy data to SQL Server, set the sink type in the copy activity to SqlSink. The following properties are supported in the copy activity sink section:
Example 1: Append data
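A sketch of the sink section for a plain append (bulk insert); writeBatchSize is optional and shown only for illustration.

```json
"sink": {
    "type": "SqlSink",
    "writeBatchSize": 100000
}
```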
Example 2: Invoke a stored procedure during copy

For more details, see Invoke a stored procedure from a SQL sink.
Example 3: Upsert data
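A sketch of the sink section configured for upsert; the key column name is a placeholder.

```json
"sink": {
    "type": "SqlSink",
    "writeBehavior": "upsert",
    "upsertSettings": {
        "useTempDB": true,
        "keys": [ "<key column name>" ]
    },
    "sqlWriterUseTableLock": false
}
```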
Parallel copy from SQL database

The SQL Server connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the Source tab of the copy activity.
When you enable partitioned copy, the copy activity runs parallel queries against your SQL Server source to load data by partitions. The parallel degree is controlled by the parallelCopies setting on the copy activity. We recommend that you enable parallel copy with data partitioning, especially when you load a large amount of data from your SQL Server. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name); the performance is better than writing to a single file.
Best practices to load data with partition option:
Example: full load from large table with physical partitions
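A sketch of the source section that uses the table's physical partitions to parallelize the load.

```json
"source": {
    "type": "SqlSource",
    "partitionOption": "PhysicalPartitionsOfTable"
}
```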
Example: query with dynamic range partition
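A sketch of the source section that uses a dynamic range partition; the query must contain the ?AdfDynamicRangePartitionCondition hook, and the partition column and bounds are placeholders.

```json
"source": {
    "type": "SqlSource",
    "sqlReaderQuery": "SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "<partition_column_name>",
        "partitionUpperBound": "<upper_value_of_partition_column>",
        "partitionLowerBound": "<lower_value_of_partition_column>"
    }
}
```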
Sample query to check physical partition
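One way to check whether a table has physical partitions is to join the partition catalog views, roughly as follows; replace the schema and table name filters with your own.

```sql
SELECT DISTINCT
    s.name AS SchemaName,
    t.name AS TableName,
    pf.name AS PartitionFunctionName,
    c.name AS ColumnName,
    IIF(pf.name IS NULL, 'no', 'yes') AS HasPartition
FROM sys.tables AS t
LEFT JOIN sys.objects AS o ON t.object_id = o.object_id
LEFT JOIN sys.schemas AS s ON o.schema_id = s.schema_id
LEFT JOIN sys.indexes AS i ON t.object_id = i.object_id
LEFT JOIN sys.index_columns AS ic
    ON ic.partition_ordinal > 0 AND ic.index_id = i.index_id AND ic.object_id = t.object_id
LEFT JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
LEFT JOIN sys.partition_schemes AS ps ON i.data_space_id = ps.data_space_id
LEFT JOIN sys.partition_functions AS pf ON pf.function_id = ps.function_id
WHERE s.name = '<your schema>' AND t.name = '<your table name>'
```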
If the table has physical partitions, the query returns "HasPartition" as "yes".
Best practice for loading data into SQL Server

When you copy data into SQL Server, you might require different write behavior:
See the respective sections for how to configure them and related best practices.

Append data

Appending data is the default behavior of this SQL Server sink connector. The service does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.

Upsert data

Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if the key exists, and otherwise inserting new data. To learn more about upsert settings in copy activities, see SQL Server as a sink.

Overwrite the entire table

You can configure the preCopyScript property in a copy activity sink. In this case, for each copy activity that runs, the service runs the script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first delete all the records before you bulk load the new data from the source.

Write data with custom logic

The steps to write data with custom logic are similar to those described in the Upsert data section. When you need to apply extra processing before the final insertion of source data into the destination table, you can load to a staging table and then invoke a stored procedure activity, or invoke a stored procedure in the copy activity sink to apply the data.

Invoke a stored procedure from a SQL sink

When you copy data into a SQL Server database, you can also configure and invoke a user-specified stored procedure with additional parameters on each batch of the source table. The stored procedure feature takes advantage of table-valued parameters. Note that the service automatically wraps the stored procedure in its own transaction, so any transaction created inside the stored procedure becomes a nested transaction, which can have implications for exception handling.

You can use a stored procedure when built-in copy mechanisms don't serve the purpose, for example when you want to apply extra processing before the final insertion of source data into the destination table. Some examples of extra processing are merging columns, looking up additional values, and inserting into more than one table.

The following sample shows how to use a stored procedure to do an upsert into a table in the SQL Server database. Assume that the input data and the sink Marketing table each have three columns: ProfileID, State, and Category. Do the upsert based on the ProfileID column, and only apply it for a specific category called "ProductA".
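A sketch of a table type and stored procedure that implement the upsert described above. The object names (MarketingType, spUpsertMarketing) are illustrative; in the copy activity sink you would then reference the procedure as the writer stored procedure, point the table type setting at MarketingType, and pass the category as a stored procedure parameter.

```sql
-- Table type matching the three input columns
CREATE TYPE [dbo].[MarketingType] AS TABLE
(
    [ProfileID] varchar(256) NOT NULL,
    [State]     varchar(256) NOT NULL,
    [Category]  varchar(256) NOT NULL
);
GO

-- Upsert rows for a single category, keyed on ProfileID
CREATE PROCEDURE [dbo].[spUpsertMarketing]
    @Marketing [dbo].[MarketingType] READONLY,
    @category  varchar(256)
AS
BEGIN
    MERGE [dbo].[Marketing] AS target
    USING @Marketing AS source
        ON target.ProfileID = source.ProfileID AND target.Category = @category
    WHEN MATCHED THEN
        UPDATE SET State = source.State
    WHEN NOT MATCHED THEN
        INSERT (ProfileID, State, Category)
        VALUES (source.ProfileID, source.State, source.Category);
END
GO
```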
Mapping data flow properties

When transforming data in mapping data flow, you can read from and write to tables in SQL Server. For more information, see the source transformation and sink transformation in mapping data flows.

Note: To access an on-premises SQL Server, you need to use an Azure Data Factory or Synapse workspace managed virtual network with a private endpoint. Refer to this tutorial for detailed steps.

Source transformation

The table below lists the properties supported by the SQL Server source. You can edit these properties in the Source options tab.
Tip: Common table expressions (CTEs) are not supported in the mapping data flow Query mode, because this mode requires that the query can be used in a SQL FROM clause, which a CTE cannot be. To use a CTE, create a stored procedure that wraps it, such as the following:
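A minimal sketch of such a wrapper procedure, assuming a hypothetical dbo.Persons table; the procedure and column names are illustrative.

```sql
CREATE PROCEDURE dbo.uspGetCteResult
AS
BEGIN
    -- Wrap the CTE so the data flow can call the procedure instead of a query
    WITH LatestPerson AS
    (
        SELECT PersonID, FirstName, LastName,
               ROW_NUMBER() OVER (PARTITION BY PersonID ORDER BY ModifiedDate DESC) AS rn
        FROM dbo.Persons
    )
    SELECT PersonID, FirstName, LastName
    FROM LatestPerson
    WHERE rn = 1;
END
GO
```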
Then use Stored procedure mode in the source transformation of the mapping data flow and set the stored procedure name accordingly.

SQL Server source script example

When you use SQL Server as the source type, the associated data flow script is:
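A sketch of what the generated data flow script might look like for a SQL Server source in Query mode; the stream name and query are illustrative.

```
source(
    allowSchemaDrift: true,
    validateSchema: false,
    isolationLevel: 'READ_UNCOMMITTED',
    query: 'select * from MYTABLE',
    format: 'query') ~> SQLServerSource
```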
Sink transformation

The table below lists the properties supported by the SQL Server sink. You can edit these properties in the Sink options tab.
SQL Server sink script example

When you use SQL Server as the sink type, the associated data flow script is:
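A sketch of the corresponding data flow script for a SQL Server sink writing to a table; the incoming stream and sink names are illustrative.

```
IncomingStream sink(
    allowSchemaDrift: true,
    validateSchema: false,
    deletable: false,
    insertable: true,
    updateable: true,
    upsertable: false,
    format: 'table',
    skipDuplicateMapInputs: true,
    skipDuplicateMapOutputs: true) ~> SQLServerSink
```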
Data type mapping for SQL Server

When you copy data from and to SQL Server, the following mappings are used from SQL Server data types to Azure Data Factory interim data types. Synapse pipelines, which implement Data Factory, use the same mappings. To learn how the copy activity maps the source schema and data type to the sink, see Schema and data type mappings.
Note: For data types that map to the Decimal interim type, the Copy activity currently supports precision up to 28. If you have data that requires precision greater than 28, consider converting it to a string in a SQL query. When copying data from SQL Server using Azure Data Factory, the bit data type is mapped to the Boolean interim data type. If you have data that needs to be kept as the bit data type, use queries with T-SQL CAST or CONVERT.

Lookup activity properties

To learn details about the properties, check Lookup activity.

GetMetadata activity properties

To learn details about the properties, check GetMetadata activity.

Using Always Encrypted

When you copy data from or to SQL Server with Always Encrypted, follow the steps below:
Note: SQL Server Always Encrypted supports the following scenarios:
Note: Currently, SQL Server Always Encrypted is only supported for the source transformation in mapping data flows.

Native change data capture

Azure Data Factory supports native change data capture (CDC) capabilities for SQL Server, Azure SQL DB, and Azure SQL MI. Changed data, including row inserts, updates, and deletions in SQL stores, can be automatically detected and extracted by an ADF mapping data flow. With the no-code experience in mapping data flow, users can easily achieve a data replication scenario from SQL stores by appending a database as the destination store. In addition, users can compose any data transformation logic in between to achieve an incremental ETL scenario from SQL stores.

Make sure you keep the pipeline and activity names unchanged, so that the checkpoint can be recorded by ADF and changed data is picked up from the last run automatically. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or only captures changes from that point on. If you do want to change the pipeline name or activity name but still keep the checkpoint so that changed data is captured from the last run automatically, use your own checkpoint key in the data flow activity.

When you debug the pipeline, this feature works the same way. Be aware that the checkpoint is reset when you refresh your browser during a debug run. After you are satisfied with the result from the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or captures changes from that point on.

In the monitoring section, you always have the chance to rerun a pipeline. When you do so, the changed data is always captured from the previous checkpoint of your selected pipeline run.

Example 1: When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a data replication scenario between databases. You can use the update method in the sink transformation to select whether to allow insert, update, or delete on the target database. An example script for the mapping data flow is below.
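A sketch of a mapping data flow script for this replication scenario, assuming a two-column table keyed on id. The column list is illustrative, and the CDC-related property names (such as enableNativeCdc and netChanges) are assumptions shown for illustration.

```
source(
    output(
        id as integer,
        name as string
    ),
    allowSchemaDrift: true,
    validateSchema: false,
    enableNativeCdc: true,
    netChanges: true,
    skipInitialLoad: false,
    isolationLevel: 'READ_UNCOMMITTED',
    format: 'table') ~> source1
source1 sink(
    allowSchemaDrift: true,
    validateSchema: false,
    deletable: true,
    insertable: true,
    updateable: true,
    upsertable: true,
    keys: ['id'],
    format: 'table',
    skipDuplicateMapInputs: true,
    skipDuplicateMapOutputs: true,
    errorHandlingOption: 'stopOnFirstError') ~> sink1
```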
Example 2: If you want to enable an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in mapping data flow, including isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data.
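A sketch of the derived column transformation described above; the column name operationType and the stream names are illustrative.

```
source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
```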
Known limitation:
Troubleshoot connection issues
Next steps

For a list of data stores supported as sources and sinks by the copy activity, see Supported data stores.