db2

db2 command list

烈酒焚心 submitted on 2019-12-09 23:29:16
I've been working for a little over a month now. Since my company uses DB2, I've been reading up on it, and along the way I've summarized some commonly used DB2 commands. I'm posting them here to share; I hope they're helpful.
Start the DB2 service: db2start
Stop the DB2 service: db2stop
1. Loading data:
1) Load with the default delimiter (","):
db2 "import from btpoper.txt of del insert into btpoper"
2) Load with a specified delimiter "|":
db2 "import from btpoper.txt of del modified by coldel| insert into btpoper"
2. Unloading data:
1) Unload all data from a table:
db2 "export to btpoper.txt of del select * from btpoper"
db2 "export to btpoper.txt of del modified by coldel| select * from btpoper"
2) Unload data from a table with a condition:
db2 "export to btpoper.txt of del select * from btpoper where brhid='907020000'"
db2 "export to cmmcode.txt of del select * from
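The import/export commands above assume a connection is already open; a minimal end-to-end sketch (the database, user, and table names are hypothetical, not from the post):
db2 connect to sampledb user dbuser                                            # connect before running import/export
db2 "export to btpoper.txt of del modified by coldel| select * from btpoper"   # unload with "|" as the column delimiter
db2 "import from btpoper.txt of del modified by coldel| insert into btpoper"   # reload the same file
db2 terminate                                                                  # release the CLP back-end connection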

How to Create Transportable Tablespaces Where the Source and Destination are ASM-Based (Doc ID 394798.1)

℡╲_俬逩灬. submitted on 2019-12-09 23:15:27
How to Create Transportable Tablespaces Where the Source and Destination are ASM-Based (Doc ID 394798.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.3 [Release 10.1 to 11.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.
***Checked for relevance on 28-May-2010***
GOAL
The purpose of this note
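The excerpt is cut off before the procedure itself. For orientation only, a rough, hedged outline of a typical transportable-tablespace move (the tablespace, directory objects, and file names below are illustrative assumptions, not taken from the note):
-- at the source: make the tablespace read only and export its metadata
ALTER TABLESPACE users READ ONLY;
--   expdp system dumpfile=tts_users.dmp directory=DATA_PUMP_DIR transport_tablespaces=USERS
-- copy the datafile between the ASM disk groups, for example with DBMS_FILE_TRANSFER
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE('SRC_DG_DIR', 'users.283.912345678',
                               'DST_DG_DIR', 'users_01.dbf');
END;
/
-- at the destination: plug the tablespace in
--   impdp system dumpfile=tts_users.dmp directory=DATA_PUMP_DIR transport_datafiles='+DATA/users_01.dbf'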

DB2 Partitioning

风流意气都作罢 submitted on 2019-12-09 23:14:58
Question: I know how partitioning works in DB2, but I am not sure where these partition values actually get stored. After writing a create-partition query, for example:
CREATE TABLE orders(id INT, shipdate DATE, …) PARTITION BY RANGE(shipdate) ( STARTING '1/1/2006' ENDING '12/31/2006' EVERY 3 MONTHS )
After running the above query we know that partitions are created on orders for every 3 months, but when we run a SELECT query the query engine refers to these partitions. I am curious to know where this
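In DB2 for Linux/UNIX/Windows the range boundaries are recorded in the system catalog rather than in the table data itself; a quick way to inspect them for the table above is a catalog query such as (a sketch, assuming the table was created in the default schema):
SELECT datapartitionname, seqno, lowvalue, highvalue
  FROM syscat.datapartitions
 WHERE tabschema = CURRENT SCHEMA
   AND tabname = 'ORDERS'
 ORDER BY seqno;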

Which is the Best database for Rails application?

放肆的年华 submitted on 2019-12-09 22:04:31
Question: I am developing a Rails application that will access a lot of RSS feeds or crawl sites for data (mostly news). It will be something like Google News but with a different approach, so I'll store a lot of news items (or news summaries), classify them into different categories, and use ranking and recommendation techniques. Should I go with MySQL? Is it worthwhile using IBM DB2 pureXML to store the documents? Also, Ruby search implementations (Ferret, Ultrasphinx and others) are not needed if I choose

Deploying a Distributed MySQL Architecture with MyCAT (Part 1)

三世轮回 submitted on 2019-12-09 20:01:08
Architecture and environment:
Hostname  IP
db1       192.168.31.205
db2       192.168.31.206
Preparation: open the firewall, then install and configure MySQL (on db1 and db2)
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.31.0/24" accept'
firewall-cmd --reload
mkdir /software
# upload mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz to /software
cd /usr/local/
tar zxf /software/mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz
mv mysql-5.7.20-linux-glibc2.12-x86_64 mysql
# initialize the data directories
mkdir -p /data/33{07..10}/data
mysqld --initialize-insecure --user=mysql --datadir=/data/3307/data --basedir=/usr/local/mysql
mysqld --initialize-insecure --user=mysql --datadir=/data
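The excerpt is cut off after the first initialization command; assuming the remaining instances follow the same layout (an assumption, not stated in the excerpt), the four data directories created above can be initialized in one loop:
for port in 3307 3308 3309 3310; do
  # one independent data directory per instance, all sharing the same basedir
  mysqld --initialize-insecure --user=mysql \
         --datadir=/data/${port}/data --basedir=/usr/local/mysql
done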

How do I get connection pooling working on a PHP-CGI PDO iSeries Access UnixODBC Connection?

杀马特。学长 韩版系。学妹 submitted on 2019-12-09 18:58:47
Question: I am trying to get connection pooling working using PHP/PDO with a unixODBC driver called iSeries Access for Linux. I do not set PDO::ATTR_PERSISTENT in my PDO constructor because I want to use pooling rather than persistence (I am in a PHP-CGI environment). Following the "Connection Pooling" section of http://www.ibm.com/developerworks/systems/library/es-linux_bestpract.html I have placed Pooling = Yes in my odbc.ini and CPTimeout = 600 in my odbcinst.ini. However, it seems that the ODBC driver is not
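For comparison, the unixODBC documentation enables pooling in odbcinst.ini rather than odbc.ini; a hedged sketch of that layout (the driver section name and library path are examples, not taken from the question):
[ODBC]
Pooling = Yes

[iSeries Access ODBC Driver]
# driver path is an example; use the path from your own installation
Driver    = /opt/ibm/iSeriesAccess/lib64/libcwbodbc.so
CPTimeout = 600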

MySQL Binary Log (binlog)

泪湿孤枕 submitted on 2019-12-09 18:49:34
Enable binlog in MySQL 5.7 by editing my.cnf:
[mysqld]
log-bin=[/log directory/]mysql-bin   # the directory must be readable and writable by mysql; defaults to the data directory
expire_logs_days=7                   # keep binlog files modified within the last 7 days
max_binlog_size=512M                 # size limit for a single binlog file, default 1G
# specify or ignore the databases to replicate (beware of cross-database statements)
binlog_do_db=db1
binlog_do_db=db2
#binlog_ignore_db=db1
#binlog_ignore_db=db2
Common operations:
List all binlog files: show master logs;
Show master status, including the latest binlog file name and position: show master status;
Purge expired binlog files and start logging to a newly numbered binlog file: flush logs;
Delete old binlog files:
purge master logs to 'mysql-bin.000573';
purge master logs before '2018-04-18 06:00:00';
purge master logs before DATE_SUB(NOW(), INTERVAL 2
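To see what a binlog file actually contains before purging it, the mysqlbinlog utility can dump its events; a small sketch (the file path and time window are hypothetical):
mysqlbinlog --start-datetime='2018-04-18 00:00:00' \
            --stop-datetime='2018-04-18 06:00:00' \
            /data/mysql/mysql-bin.000573 | less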

AS/400 DB2 Logical File vs Table Index

[亡魂溺海] submitted on 2019-12-09 18:03:22
Question: I'm coming from a MSSQL background, and when I ask people at my company whether they've created indexes on certain columns they say yes, but point me to these things called Logical Files. In iSeries Navigator these Logical Files show up under the 'Views' category. When I click the 'Indexes' category nothing is there, leading me to believe that there are actually no indexes created on any columns, at least as I understand them. A Logical File appears to be a View sorted by certain columns. So
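For context, on DB2 for i a keyed logical file and an SQL index both create a keyed access path that the query optimizer can use; a hedged illustration with hypothetical names:
-- the SQL-side counterpart of a logical file keyed on SHIPDATE
CREATE INDEX ORDERS_SHIPDATE_IX ON ORDERS (SHIPDATE);
Because existing keyed logical files already provide such access paths, separate SQL indexes over the same key columns are often unnecessary, which can explain an empty 'Indexes' category even on a well-tuned system.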

Format date to string

你离开我真会死。 submitted on 2019-12-09 13:29:50
Question: I'm trying to format a DB2 date into a string as "YYYY/MM/DD". The best I have so far is:
SELECT CAST(YEAR(MYDATE) AS VARCHAR(4)) || '/' || CAST(MONTH(MYDATE) AS VARCHAR(2)) || '/' || RIGHT('00' || CAST(DAY(MYDATE) AS VARCHAR(2)), 2) FROM MYCALENDAR
Is there a better, terser way to do this? PS: Toying around with locales is not an option.
Answer 1: According to the IBM documentation the following should work:
SELECT VARCHAR_FORMAT(MYDATE, 'YYYY/MM/DD') FROM MYCALENDAR;
Source: https://stackoverflow.com
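As a side note (hedged: TO_CHAR is documented as a synonym for VARCHAR_FORMAT in recent DB2 releases), the same formatting can also be written as:
SELECT TO_CHAR(MYDATE, 'YYYY/MM/DD') FROM MYCALENDAR;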

DB2 catalog (cataloging)

偶尔善良 submitted on 2019-12-09 12:37:05
(Steps) as the ap user:
(1) Enter the DB2 command line: db2
(2) catalog database command: catalog db list
(3) View the local node directory (IP, node name, service name, directory entry type): list node directory
(4) Uncatalog a node: uncatalog node ADP
(5) Catalog a TCP/IP node: catalog tcpip node /*ADP: node name*/ remote /*10.108.48.205: IP address*/ server /*50000: port*/
(6) Exit DB2: terminate
Key points:
1. View the local node directory: list node directory
2. Catalog a TCP/IP node: catalog tcpip node n_aaa remote ip_xxx server 50000
3. Uncatalog a node: uncatalog node n_aaa
4. View the system database directory: list db directory
5. Catalog a database: catalog db db_aaa as db_bbb at node n_aaa
6. Uncatalog a database: uncatalog db db_bbb
7. Configure an instance: catalog TCPIP node node_1 remote 192.168.0.1 server 50000; catalog db db_aaa as db_bbb at
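Putting the key points together, a hedged end-to-end sequence for cataloging and testing a remote database (the host, node, database, and user names are hypothetical):
db2 catalog tcpip node mynode remote 192.168.0.1 server 50000
db2 catalog db sample as remdb at node mynode
db2 terminate                      # refresh the CLP's cached directory information
db2 list node directory            # verify the node entry
db2 list db directory              # verify the database entry
db2 connect to remdb user dbuser   # test the connection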