Data Pump is a handy tool. It has many options that let you work in different ways and give you flexibility. In this entry, I'll use some of these options to explain how to export a remote database to a local ASM disk group.

Let's jump in. Start by setting up the environment on the local server:

1. Create a directory in ASM disk group:

SQL> alter diskgroup RECOVERY add directory '+RECOVERY/DPUMP';

2. Create a directory in the database pointing to the ASM directory:

SQL> create directory ASM_DIR as '+RECOVERY/DPUMP';

3. Grant read and write on the directory created above to the user you'll use for the export:

SQL> grant read, write on directory ASM_DIR to <export_user>;

4. Create a database link:

SQL> create public database link target_db connect to <remote_user> identified by <password> using '<tns_entry>';

Both the local and remote users must be granted the EXP_FULL_DATABASE role.
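
For reference, the grant would be done like this on each database (the user names are the placeholders from the steps above):

SQL> grant EXP_FULL_DATABASE to <export_user>;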


Now it's time to export. At the local server, run:
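
The original command is missing here; based on the parameters explained below, a representative expdp invocation would look like this (the dump/log file names and the parallel degree are illustrative, not from the original):

$ expdp <export_user>/<password> NETWORK_LINK=target_db FULL=Y \
    DUMPFILE=ASM_DIR:full_db_%U.dmp LOGFILE=full_db_exp.log \
    EXCLUDE=STATISTICS COMPRESSION=ALL PARALLEL=4

Note the DIRECTORY:FILENAME syntax in DUMPFILE, which sends the dump files to ASM_DIR while the log file falls back to the default data pump directory.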


To briefly explain the parameters given:
  • NETWORK_LINK is the database link pointing to the remote database. This is what makes it a remote export.
  • As you may see, the dump file is created under the directory ASM_DIR, which points to the directory in the ASM disk group.
  • Notice that the log file is not created under the ASM directory, because that's not supported. That's why I've used the default data pump directory. Alternatively, you may skip logging altogether with the parameter NOLOGFILE=Y.
  • We're taking a full database dump with the FULL=Y parameter. Instead, you may use the SCHEMAS, TABLES or TABLESPACES parameters to export/import only the given schemas, tables or tablespaces.
  • Our dump will not include statistics because of the parameter EXCLUDE=STATISTICS. You can use this option to exclude other objects, such as tables or indexes, too. I exclude statistics because I'd rather gather them fresh on the database I'm going to import the dump into.
  • By means of the COMPRESSION parameter we'll get a smaller dump file. To give an idea of how much the size can be reduced, here are a few results I've had with different kinds of data:
Estimated    Actual       Gain Percentage
687.7 MB     188.13 MB    72.64%
154.3 GB     56.44 GB     63.42%
287.2 GB     207.93 GB    27.6%

The reason for the low compression ratio of the last one is that it's a tablespace with Hybrid Columnar Compression (HCC) enabled on an Exadata machine; it's already compressed quite a bit. The COMPRESSION parameter can be set to ALL, METADATA_ONLY, DATA_ONLY or NONE.
  • Lastly, we use the PARALLEL parameter to improve performance. Data pump is a tool that can take full advantage of parallelism, so it's good to use as much of it as you can.
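
One detail worth adding here (my note, not from the original steps): for PARALLEL to actually spread the work across multiple dump files, the DUMPFILE specification should contain the %U substitution variable, so each worker gets its own file:

$ expdp <export_user>/<password> NETWORK_LINK=target_db FULL=Y \
    DUMPFILE=ASM_DIR:full_db_%U.dmp PARALLEL=4

With a single fixed file name, the workers would serialize on that one file and much of the parallelism would be wasted.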
Checking Up

After exporting, you can query and check the dump files. To list the files exported under ASM:

SQL> select a.name, f.bytes/1024/1024 mb, f.creation_date 
from v$asm_file f, v$asm_alias a
where f.group_number=a.group_number and f.file_number=a.file_number
and a.system_created='N' and f.type='DUMPSET';

To check the dump file, the best tool to use is again data pump, with the SHOW parameter:
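
The original command is missing here; a representative invocation could look like the following. Note that in Data Pump the legacy SHOW=Y of imp is handled by the SQLFILE parameter, which writes the DDL contained in the dump to a text file instead of executing it (the file names are illustrative, and the SQL file goes to the default data pump directory since it can't be written into ASM):

$ impdp <export_user>/<password> DIRECTORY=ASM_DIR DUMPFILE=full_db_%U.dmp \
    SQLFILE=DATA_PUMP_DIR:full_db_ddl.sql FULL=Y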


Finally, if you need to copy the dump file somewhere else, you can do it with the asmcmd tool:

$ asmcmd -p cp +RECOVERY/DPUMP/dump_file.dmp /destination_directory