Oracle

MySQL 5.7.20: Installation Tutorial

Submitted by て烟熏妆下的殇ゞ on 2021-02-07 20:34:52
Download the installation package from the MySQL website: /mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz

```shell
# Switch to the target directory
cd /usr/local
# Extract the downloaded archive
tar -zxvf /software/mysql/mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz
# Rename the extracted directory
mv mysql-5.7.20-linux-glibc2.12-x86_64 mysql
# Create the data directory
mkdir data
# Create the mysql group
groupadd mysql
# Create the mysql user with login disabled
useradd -r -s /sbin/nologin -g mysql mysql -d /usr/local/mysql
# Change file ownership
chown -R mysql.mysql /usr/local/mysql/
# Initialize the system database. Note: do not use ./bin/mysql_install_db, it is deprecated
./bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql/ --datadir=/usr/local/mysql/data/
```

After initialization a log is printed, as shown below. Pay attention to the end of the output (the part marked in red): that is the temporary root password. 2018 -
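The first login must use that temporary password, and MySQL 5.7 forces a password change before any other statement will run. A minimal sketch of the usual next steps, assuming the directory layout set up above (the new password value is just a placeholder):

```shell
# Start the server as the mysql user (path assumed from the layout above)
/usr/local/mysql/bin/mysqld_safe --user=mysql &
# Log in with the temporary password printed by --initialize and immediately
# set a new one; --connect-expired-password lets the client run this single
# statement even though the temporary password has expired
/usr/local/mysql/bin/mysql -uroot -p \
  --connect-expired-password \
  -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword!';"
```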

How can I retrieve next n unlocked rows from Oracle?

Submitted by 自古美人都是妖i on 2021-02-07 20:19:02
Question: Suppose I have an Oracle table books storing n books, with columns id and title. Some of the rows are locked by a SELECT ... FOR UPDATE clause; suppose the rows with id in (1, 2, 4, 5, 6, 9) are locked. Now I want to write a SQL statement that, when executed, returns the next 2 records that are unlocked, and the SQL may be called by multiple processes at the same time. That is to say, the first call should return the id = 3 and id = 7 records; the second call should return id = 8 and id = 10
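One common approach is a cursor with FOR UPDATE SKIP LOCKED, fetched n rows at a time. (The row-limiting FETCH FIRST clause cannot be combined with FOR UPDATE in Oracle, which is why the fetch is done from a cursor.) A sketch, not from the original post, assuming the books(id, title) table described above:

```sql
DECLARE
  CURSOR c IS
    SELECT id, title
    FROM   books
    ORDER  BY id
    FOR UPDATE SKIP LOCKED;   -- pass over rows locked by other sessions
  TYPE t_ids    IS TABLE OF books.id%TYPE;
  TYPE t_titles IS TABLE OF books.title%TYPE;
  l_ids    t_ids;
  l_titles t_titles;
BEGIN
  OPEN c;
  -- Fetch the next 2 unlocked rows; concurrent callers each get different rows
  FETCH c BULK COLLECT INTO l_ids, l_titles LIMIT 2;
  -- ... process the rows; they stay locked until COMMIT or ROLLBACK ...
  CLOSE c;
END;
/
```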

Oracle CONNECT BY recursive child to parent query, include ultimate parent that self references

Submitted by 假装没事ソ on 2021-02-07 19:59:18
Question: In the following example:

```
id  parent_id
A   A
B   A
C   B
```

```sql
select id, parent_id
from table
start with id = 'A'
connect by nocycle parent_id = prior id
```

I get:

```
A  A
B  A
C  B
```

In my database the table has millions of rows with deep and wide hierarchies, and I am not interested in all of the children. I can derive the children I am interested in, so I want to turn the query on its head: supply START WITH with the child ids, then output the parents recursively until I reach the top. In my case the
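Turning the traversal around means starting from the known child ids and following id = PRIOR parent_id upward. A sketch, not from the original post, using the sample data above; NOCYCLE is what keeps the self-referencing root row (A -> A) from looping forever while still letting it appear once in the output:

```sql
SELECT id, parent_id
FROM   my_table
START WITH id IN ('C')          -- the derived child ids go here
CONNECT BY NOCYCLE id = PRIOR parent_id;
-- walks C -> B -> A, and includes the self-referencing ultimate parent A
```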

dbms_scheduler Create Job Not running Job

Submitted by 大兔子大兔子 on 2021-02-07 19:55:18
Question: I am trying to run a procedure through DBMS_SCHEDULER, but the job is only created and never runs. Database version: Oracle 11.2.x.

Procedure:

```sql
create or replace procedure count_comp as
  Total_count number;
begin
  select count(*) into Total_count from user_tables;
  dbms_output.put_line('Number ' || Total_count);
end;
```

Create job:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name   => 'My_Count_Job',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'count_comp',
    start_date => '28-APR-08 07.00.00 PM Asia
```
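A common cause of this symptom (the post above is truncated, so this is an educated guess, not the confirmed answer): DBMS_SCHEDULER.CREATE_JOB creates jobs disabled by default, so a job that is never enabled never fires. A sketch of both ways to enable it:

```sql
-- Either create the job already enabled ...
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'My_Count_Job',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'count_comp',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',     -- assumed schedule for illustration
    enabled         => TRUE);            -- jobs are created disabled by default
END;
/
-- ... or enable an already-created job explicitly:
BEGIN
  DBMS_SCHEDULER.ENABLE('My_Count_Job');
END;
/
```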

Spark error - Decimal precision 39 exceeds max precision 38

Submitted by 拜拜、爱过 on 2021-02-07 19:34:20
Question: When I try to collect data from a Spark dataframe, I get an error stating "java.lang.IllegalArgumentException: requirement failed: Decimal precision 39 exceeds max precision 38". All of the data in the Spark dataframe comes from an Oracle database, where I believe the decimal precision is < 38. Is there any way I can achieve this without modifying the data?

```r
# Load required table into memory from Oracle database
df <- loadDF(sqlContext, source = "jdbc", url = "jdbc:oracle:thin:usr/pass@url.com:1521" ,
```
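One workaround that avoids modifying the stored data is to push a CAST into the JDBC source, so Oracle reports the column at a precision Spark's DecimalType can hold (Oracle NUMBER columns declared without precision can be surfaced as precision 39+). A sketch only: the table and column names here are made up, and the original call above is truncated, so the remaining parameters are assumptions:

```r
# Wrap the source table in a subquery that bounds the precision explicitly
df <- loadDF(sqlContext,
             source  = "jdbc",
             url     = "jdbc:oracle:thin:usr/pass@url.com:1521",
             dbtable = "(SELECT CAST(amount AS NUMBER(38, 10)) AS amount FROM my_table)")
```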

Create oracle scheduler job which runs daily

Submitted by 那年仲夏 on 2021-02-07 18:32:12
Question: I want to create an Oracle Scheduler job that runs daily at 20:00 and runs for 30 minutes. The job will delete rows from the KPI_LOGS table, since that table contains a large amount of data and continues to grow. I created the script below in Oracle SQL Developer, but I am not sure whether it is correct, as I am new to the scheduler job concept.

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name   => '"RATOR_MONITORING"."CROP_KPI_LOGS"',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'DELETE FROM KPI
```
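The "daily at 20:00" part is expressed with a calendaring repeat_interval. A sketch, since the original script is truncated: the DELETE predicate below is invented for illustration, and note that the scheduler cannot cap a run at 30 minutes by itself (max_run_duration only raises an event), so the PL/SQL block would have to bound its own runtime, for example by deleting in limited batches:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => '"RATOR_MONITORING"."CROP_KPI_LOGS"',
    job_type        => 'PLSQL_BLOCK',
    -- assumed retention predicate; the original DELETE statement is truncated
    job_action      => 'BEGIN DELETE FROM kpi_logs WHERE created < SYSDATE - 30; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=20; BYMINUTE=0; BYSECOND=0',
    enabled         => TRUE,
    comments        => 'Purge old KPI_LOGS rows daily at 20:00');
END;
/
```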