greatest-n-per-group

Limit join to one row

Submitted by 自古美人都是妖i on 2019-12-07 06:24:28
Question: I have the following query:

  SELECT sum((select count(*) as itemCount) * "SalesOrderItems"."price") as amount,
         'rma' as "creditType",
         "Clients"."company" as "client",
         "Clients".id as "ClientId",
         "Rmas".*
  FROM "Rmas"
  JOIN "EsnsRmas" ON ("EsnsRmas"."RmaId" = "Rmas"."id")
  JOIN "Esns" ON ("Esns".id = "EsnsRmas"."EsnId")
  JOIN "EsnsSalesOrderItems" ON ("EsnsSalesOrderItems"."EsnId" = "Esns"."id")
  JOIN "SalesOrderItems" ON ("SalesOrderItems"."id" = "EsnsSalesOrderItems"."SalesOrderItemId")
  JOIN
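The query above is cut off, but the underlying problem it hints at is common: summing over a chain of JOINs fans out the parent rows and inflates the aggregate. A minimal sketch of one fix, using a correlated scalar subquery so each parent row is aggregated exactly once; the two-table schema (rmas/items) is an invented stand-in for the question's tables, run through SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE rmas  (id INTEGER PRIMARY KEY);
    -- 'items' is a simplified stand-in for the SalesOrderItems chain
    CREATE TABLE items (rma_id INTEGER, price REAL);
    INSERT INTO rmas  VALUES (1), (2);
    INSERT INTO items VALUES (1, 10.0), (1, 20.0), (2, 5.0);
""")

# The correlated scalar subquery aggregates the child rows without
# multiplying the parent rows in the outer query.
rows = con.execute("""
    SELECT r.id,
           (SELECT COALESCE(SUM(i.price), 0)
              FROM items i
             WHERE i.rma_id = r.id) AS amount
      FROM rmas r
     ORDER BY r.id
""").fetchall()
print(rows)  # [(1, 30.0), (2, 5.0)]
```

Each parent appears once, with its own child-row total, no matter how many child rows exist.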

Postgres, table1 left join table2 with only 1 row per ID in table1

Submitted by 独自空忆成欢 on 2019-12-07 05:06:25
Question: Ok, so the title is a bit convoluted. This is basically a greatest-n-per-group type problem, but I can't for the life of me figure it out. I have a table, user_stats:

  id               | bigint  | not null default nextval('user_stats_id_seq'::regclass)
  user_id          | bigint  | not null
  datestamp        | integer | not null
  post_count       | integer |
  friends_count    | integer |
  favourites_count | integer |
  Indexes:
      "user_stats_pk" PRIMARY KEY
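One standard answer to "one stats row per user, keeping users with no stats" is the anti-join: LEFT JOIN the stats table to itself on a newer datestamp and keep only rows where no newer match exists. A hedged sketch on an assumed users table (not shown in the question), using SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_stats (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        datestamp INTEGER NOT NULL,
        post_count INTEGER
    );
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO user_stats VALUES
        (1, 1, 100, 5),
        (2, 1, 200, 9),   -- newest row for alice
        (3, 2, 150, 2);   -- bob has one row; carol has none
""")

# Keep a stats row only if no newer stats row exists for the same
# user; users without any stats still appear thanks to the LEFT JOINs.
rows = con.execute("""
    SELECT u.name, s.datestamp, s.post_count
      FROM users u
      LEFT JOIN user_stats s  ON s.user_id  = u.id
      LEFT JOIN user_stats s2 ON s2.user_id = u.id
                             AND s2.datestamp > s.datestamp
     WHERE s2.id IS NULL
     ORDER BY u.id
""").fetchall()
print(rows)  # [('alice', 200, 9), ('bob', 150, 2), ('carol', None, None)]
```

The same pattern works in Postgres; DISTINCT ON (user_id) ... ORDER BY user_id, datestamp DESC is a Postgres-specific alternative.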

Select rows with Max Value grouped by two columns

Submitted by 雨燕双飞 on 2019-12-07 04:17:48
Question: I have seen quite a few solutions to this kind of problem (esp. this one: SQL Select only rows with Max Value on a Column), but none of them seems appropriate. I have the following table layout, a versioning of attachments, which are bound to entities:

  TABLE attachments
  +------+------------+----------+----------------+-----------+
  | id   | entitiy_id | group_id | version_number | filename  |
  +------+------------+----------+----------------+-----------+
  | 1    | 1          | 1        | 1              | file1-1
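Grouping the max by two columns works the same way as by one: group the subquery by both columns and join back on both plus the max. A sketch on an assumed version of the schema (spelled entity_id here; the question's table uses entitiy_id), via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE attachments (
        id INTEGER PRIMARY KEY,
        entity_id INTEGER, group_id INTEGER,
        version_number INTEGER, filename TEXT
    );
    INSERT INTO attachments VALUES
        (1, 1, 1, 1, 'file1-1'),
        (2, 1, 1, 2, 'file1-2'),   -- latest version for (entity 1, group 1)
        (3, 1, 2, 1, 'file2-1'),
        (4, 2, 1, 1, 'file3-1');
""")

# Join back to a subquery grouped by BOTH columns, matching on the max.
rows = con.execute("""
    SELECT a.entity_id, a.group_id, a.version_number, a.filename
      FROM attachments a
      JOIN (SELECT entity_id, group_id, MAX(version_number) AS max_v
              FROM attachments
             GROUP BY entity_id, group_id) m
        ON a.entity_id = m.entity_id
       AND a.group_id  = m.group_id
       AND a.version_number = m.max_v
     ORDER BY a.id
""").fetchall()
print(rows)
# [(1, 1, 2, 'file1-2'), (1, 2, 1, 'file2-1'), (2, 1, 1, 'file3-1')]
```

Only the highest version per (entity_id, group_id) pair survives the join.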

MySQL world database Trying to avoid subquery

Submitted by 别等时光非礼了梦想. on 2019-12-06 17:56:24
I'm using the MySQL world database. For each continent, I want to return the name of the country with the largest population. I was able to come up with a query that works, but I'm trying to find another query that uses joins only and avoids the subquery. Is there a way to write this query using JOIN?

  SELECT Continent, Name
  FROM Country c1
  WHERE Population >= ALL (SELECT Population
                           FROM Country c2
                           WHERE c1.continent = c2.continent);

  +-----------+-----------+
  | Continent | Name      |
  +-----------+-----------+
  | Oceania   | Australia
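A JOIN-only rewrite is the classic exclusion self-join: LEFT JOIN each country to any same-continent country with a larger population, then keep the rows with no match. A sketch on a tiny made-up subset of the world database (populations are illustrative), via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Country (Name TEXT, Continent TEXT, Population INTEGER);
    INSERT INTO Country VALUES
        ('Australia',   'Oceania',   25000000),
        ('New Zealand', 'Oceania',    5000000),
        ('China',       'Asia',    1400000000),
        ('India',       'Asia',    1380000000);
""")

# A country is the continent's maximum exactly when no same-continent
# country with a larger population exists (c2 comes back NULL).
rows = con.execute("""
    SELECT c1.Continent, c1.Name
      FROM Country c1
      LEFT JOIN Country c2
        ON c1.Continent = c2.Continent
       AND c2.Population > c1.Population
     WHERE c2.Name IS NULL
     ORDER BY c1.Continent
""").fetchall()
print(rows)  # [('Asia', 'China'), ('Oceania', 'Australia')]
```

Note that on an exact population tie this returns both tied countries, just like the >= ALL version.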

Using DISTINCT inside JOIN is creating trouble [duplicate]

Submitted by 99封情书 on 2019-12-06 17:38:28
This question already has an answer here (closed 6 years ago). Possible duplicate: How can I modify this query with two Inner Joins so that it stops giving duplicate results? I'm having trouble getting my query to work.

  SELECT itpitems.identifier, itpitems.name, itpitems.subtitle, itpitems.description,
         itpitems.itemimg, itpitems.mainprice, itpitems.upc, itpitems.isbn,
         itpitems.weight, itpitems.pages, itpitems.publisher, itpitems.medium_abbr,
         itpitems.medium_desc, itpitems.series_abbr, itpitems.series_desc,
         itpitems.voicing_desc, itpitems.pianolevel_desc, itpitems.bandgrade_desc,
         itpitems
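When a JOIN is only used to filter rows (not to pull columns from the joined table), duplicates are better avoided with EXISTS than papered over with DISTINCT. A hedged sketch of that pattern on an invented items/ratings pair (the question's real tables are not fully shown), via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE ratings (item_id INTEGER, stars INTEGER);
    INSERT INTO items VALUES (1, 'song A'), (2, 'song B'), (3, 'song C');
    -- item 1 has two ratings: a plain JOIN would return it twice
    INSERT INTO ratings VALUES (1, 4), (1, 5), (2, 3);
""")

# EXISTS tests for at least one matching row without ever
# multiplying the outer rows, so no DISTINCT is needed.
rows = con.execute("""
    SELECT i.id, i.name
      FROM items i
     WHERE EXISTS (SELECT 1 FROM ratings r WHERE r.item_id = i.id)
     ORDER BY i.id
""").fetchall()
print(rows)  # [(1, 'song A'), (2, 'song B')]
```

Each qualifying item appears once, regardless of how many child rows match.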

MySQL - Max() returns wrong result

Submitted by 佐手、 on 2019-12-06 17:26:27
I tried this query on MySQL Server (5.1.41):

  SELECT max(volume), dateofclose, symbol, volume, close, market
  FROM daily
  GROUP BY market

I got this result:

  max(volume)  dateofclose  symbol   volume   close  market
  287031500    2010-07-20   AA.P     500      66.41  AMEX
  242233000    2010-07-20   AACC     16200    3.98   NASDAQ
  1073538000   2010-07-20   A        4361000  27.52  NYSE
  2147483647   2010-07-20   AAAE.OB  400      0.01   OTCBB
  437462400    2010-07-20   AAB.TO   31400    0.37   TSX
  61106320     2010-07-20   AA.V     0        0.24   TSXV

As you can see, the maximum volume is very different from the 'real' value of the volume column?!? The volume column is defined as int(11)
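The non-aggregated columns (symbol, volume, close, ...) in that query come from an arbitrary row of each group in MySQL, not from the row holding the maximum. The portable fix is to compute the per-market maximum in a subquery and join back to fetch the full row. A sketch with made-up sample data, via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE daily (symbol TEXT, volume INTEGER, close REAL, market TEXT);
    INSERT INTO daily VALUES
        ('AA.P', 500,       66.41, 'AMEX'),
        ('ABC',  287031500,  1.00, 'AMEX'),   -- true max-volume row for AMEX
        ('AACC', 16200,      3.98, 'NASDAQ'),
        ('XYZ',  242233000,  2.00, 'NASDAQ');
""")

# Join back on (market, max volume) so the other columns are
# guaranteed to come from the maximum row, on any SQL engine.
rows = con.execute("""
    SELECT d.market, d.symbol, d.volume
      FROM daily d
      JOIN (SELECT market, MAX(volume) AS max_vol
              FROM daily
             GROUP BY market) m
        ON d.market = m.market
       AND d.volume = m.max_vol
     ORDER BY d.market
""").fetchall()
print(rows)  # [('AMEX', 'ABC', 287031500), ('NASDAQ', 'XYZ', 242233000)]
```

Separately, 2147483647 is exactly the INT maximum, which suggests the stored value overflowed; a volume column that large should be BIGINT.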

How do I limit a LEFT JOIN to the 1st result in SQL Server?

Submitted by 六月ゝ 毕业季﹏ on 2019-12-06 16:50:56
Question: I have a bit of SQL that is almost doing what I want it to do. I'm working with three tables: Users, UserPhoneNumbers, and UserPhoneNumberTypes. I'm trying to get a list of users with their phone numbers for an export. The database itself is old and has some integrity issues. My issue is that there should only ever be one of each phone number type per user in the database, but that's not the case. When I run this I get multi-line results for each person if they contain, for example, two "Home" numbers.
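In SQL Server this is usually solved with OUTER APPLY ... SELECT TOP 1, or with ROW_NUMBER(); the ROW_NUMBER() form also runs elsewhere, so it is sketched here on an assumed, simplified two-table version of the schema (the real column names and the UserPhoneNumberTypes table are not shown in the question), via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE UserPhoneNumbers (
        id INTEGER PRIMARY KEY, user_id INTEGER,
        type TEXT, number TEXT
    );
    INSERT INTO Users VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO UserPhoneNumbers VALUES
        (1, 1, 'Home', '555-0001'),
        (2, 1, 'Home', '555-0002'),  -- duplicate "Home" number for Ann
        (3, 2, 'Home', '555-0003');
""")

# Rank the numbers within each (user, type) and keep only the first,
# so duplicate "Home" rows no longer multiply the result.
rows = con.execute("""
    SELECT name, type, number FROM (
        SELECT u.name, p.type, p.number,
               ROW_NUMBER() OVER (PARTITION BY p.user_id, p.type
                                  ORDER BY p.id) AS rn
          FROM Users u
          JOIN UserPhoneNumbers p ON p.user_id = u.id
    ) WHERE rn = 1
    ORDER BY name
""").fetchall()
print(rows)  # [('Ann', 'Home', '555-0001'), ('Ben', 'Home', '555-0003')]
```

The ORDER BY inside the window decides which duplicate wins; pick a deterministic column (here the surrogate id).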

Get the minimum non zero value across multiple columns

Submitted by 社会主义新天地 on 2019-12-06 15:46:08
Question: Let's say I have the following table:

  CREATE TABLE numbers (
      key     integer NOT NULL DEFAULT 0,
      number1 integer NOT NULL DEFAULT 0,
      number2 integer NOT NULL DEFAULT 0,
      number3 integer NOT NULL DEFAULT 0,
      number4 integer NOT NULL DEFAULT 0,
      CONSTRAINT pk PRIMARY KEY (key),
      CONSTRAINT nonzero CHECK (key <> 0)
  )

What I want to retrieve is the minimum value for a given key across all 4 numbers, but ignoring those that are zero. I started with something like this when I figured that I'll have problem
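One portable approach is to unpivot the four columns into rows with UNION ALL, drop the zeros, and take MIN per key; this sidesteps engine differences in how LEAST/min() handle NULLs. A sketch via SQLite (the "key" column is quoted since it is a keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE numbers (
        "key"   integer PRIMARY KEY,
        number1 integer, number2 integer,
        number3 integer, number4 integer
    );
    INSERT INTO numbers VALUES (1, 0, 7, 3, 0), (2, 5, 0, 0, 9);
""")

# Unpivot the four columns into (key, n) rows, filter out the
# zeros, then aggregate the minimum per key.
rows = con.execute("""
    SELECT "key", MIN(n) FROM (
        SELECT "key", number1 AS n FROM numbers UNION ALL
        SELECT "key", number2 FROM numbers UNION ALL
        SELECT "key", number3 FROM numbers UNION ALL
        SELECT "key", number4 FROM numbers
    )
    WHERE n <> 0
    GROUP BY "key"
    ORDER BY "key"
""").fetchall()
print(rows)  # [(1, 3), (2, 5)]
```

In Postgres the same idea can be written more compactly with LEAST over NULLIF(numberN, 0), since LEAST there ignores NULL arguments.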

What's the most efficient way to generate this report?

Submitted by 冷暖自知 on 2019-12-06 15:32:50
Given a table (daily_sales) with, say, 100k rows of the following data/columns:

  id  rep  sales  date
  1   a    123    12/15/2011
  2   b    153    12/15/2011
  3   a    11     12/14/2011
  4   a    300    12/13/2011
  5   a    120    12/12/2011
  6   b    161    11/15/2011
  7   a    3      11/14/2011
  8   c    13     11/14/2011
  9   c    44     11/13/2011

What would be the most efficient way to write a report (completely in SQL) showing the two most recent entries (rep, sales, date) for each name, so the output would be:

  a  123  12/15/2011
  a  11   12/14/2011
  b  153  12/15/2011
  b  161  11/15/2011
  c  13   11/14/2011
  c  44   11/13/2011

Thanks! For MySQL, as explained in @Quassnoi's blog, an index on (name,
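"Top two per group" is where ROW_NUMBER() shines: number each rep's rows by date descending and keep rn <= 2. A sketch of the question's data (dates converted to ISO format here so text ordering is chronological; the original uses MM/DD/YYYY), via SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE daily_sales (id INTEGER PRIMARY KEY, rep TEXT,
                              sales INTEGER, date TEXT);
    -- ISO dates so text ordering matches chronological order
    INSERT INTO daily_sales VALUES
        (1, 'a', 123, '2011-12-15'), (2, 'b', 153, '2011-12-15'),
        (3, 'a',  11, '2011-12-14'), (4, 'a', 300, '2011-12-13'),
        (5, 'a', 120, '2011-12-12'), (6, 'b', 161, '2011-11-15'),
        (7, 'a',   3, '2011-11-14'), (8, 'c',  13, '2011-11-14'),
        (9, 'c',  44, '2011-11-13');
""")

# Number each rep's rows newest-first, then keep the top two.
rows = con.execute("""
    SELECT rep, sales, date FROM (
        SELECT rep, sales, date,
               ROW_NUMBER() OVER (PARTITION BY rep
                                  ORDER BY date DESC) AS rn
          FROM daily_sales
    ) WHERE rn <= 2
    ORDER BY rep, date DESC
""").fetchall()
print(rows)
# [('a', 123, '2011-12-15'), ('a', 11, '2011-12-14'),
#  ('b', 153, '2011-12-15'), ('b', 161, '2011-11-15'),
#  ('c', 13, '2011-11-14'), ('c', 44, '2011-11-13')]
```

On MySQL versions without window functions (pre-8.0), the index-plus-self-join tricks the answer alludes to are the usual workaround.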

select the most recent entry

Submitted by 本小妞迷上赌 on 2019-12-06 15:21:28
I have the following table:

  LOCATION_ID  PERSON_ID  DATE
  3            65         2016-06-03
  7            23         2016-10-28
  3            23         2016-08-05
  5            65         2016-07-14

I want to build a select query in PL/SQL to select, for each PERSON_ID, the record with the most recent DATE. For the above sample, the desired result should be:

  LOCATION_ID  PERSON_ID  DATE
  5            65         2016-07-14
  7            23         2016-10-28

(DATE expressed as 'YYYY-MM-DD') Thank you!

The other proposals are correct, but the most compact and likely fastest solution is to use the FIRST_VALUE and LAST_VALUE analytic functions:

  SELECT DISTINCT FIRST_VALUE(LOCATION_ID) OVER
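The answer's query is truncated, but the FIRST_VALUE-plus-DISTINCT pattern it names can be sketched as follows. The table and column names (visits, visit_date) are assumptions, since the question's table is unnamed and DATE is a reserved word; the same analytic functions exist in Oracle and SQLite alike:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE visits (location_id INTEGER, person_id INTEGER,
                         visit_date TEXT);
    INSERT INTO visits VALUES
        (3, 65, '2016-06-03'), (7, 23, '2016-10-28'),
        (3, 23, '2016-08-05'), (5, 65, '2016-07-14');
""")

# FIRST_VALUE over each person's rows, ordered newest-first, yields
# the most recent location/date on every row of the partition;
# DISTINCT then collapses those identical rows to one per person.
rows = con.execute("""
    SELECT DISTINCT person_id,
           FIRST_VALUE(location_id) OVER (PARTITION BY person_id
                                          ORDER BY visit_date DESC) AS location_id,
           FIRST_VALUE(visit_date)  OVER (PARTITION BY person_id
                                          ORDER BY visit_date DESC) AS visit_date
      FROM visits
     ORDER BY person_id
""").fetchall()
print(rows)  # [(23, 7, '2016-10-28'), (65, 5, '2016-07-14')]
```

An equivalent formulation wraps ROW_NUMBER() in a subquery and filters rn = 1, which avoids the DISTINCT.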