greatest-n-per-group

SQL Server - Only Select Latest Date

泄露秘密 submitted on 2019-12-11 10:43:27
Question: RDBMS = Microsoft SQL Server. I work for a refrigeration company and we want to do a better job of tracking the cost at which bottles of refrigerant were bought for each inventory location. I am trying to create a SQL query that pulls this information, but I am running into some issues. For each inventory location I want to display the last cost refrigerant was bought at for that inventory location. I want to see the latest date we have a record of for this location purchasing a specific refrigerant. I …
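A minimal sketch of one common approach: rank purchases per location and refrigerant by date and keep only the newest row. The table and column names (purchases, location_id, refrigerant, purchase_date, cost) are assumptions, since the excerpt does not show the schema.

    -- SQL Server: newest purchase per (location, refrigerant); all names are placeholders
    SELECT location_id, refrigerant, purchase_date, cost
    FROM (
        SELECT location_id, refrigerant, purchase_date, cost,
               ROW_NUMBER() OVER (PARTITION BY location_id, refrigerant
                                  ORDER BY purchase_date DESC) AS rn
        FROM purchases
    ) AS ranked
    WHERE rn = 1;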

Selecting Greatest N records in X groups

假如想象 submitted on 2019-12-11 10:42:46
Question: So I've been running through all the questions under the greatest-n-per-group tag, and either I don't understand what I'm reading, or nothing has fit my needs so far. This link has also provided a lot of useful information, but still no answer. I've got a table with the following fields: id (unique int), user_id (int), category (varchar), score (int), interest (int). I believe my problem strays from the common greatest-n-per-group question, in that I don't need the greatest N for every group. I …
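The excerpt is cut off, so only the standard pattern can be sketched here: take the top N rows per group, restricted to the groups of interest. The table name (scores), N = 3, and the category filter are assumptions; the window function requires MySQL 8+, PostgreSQL, or similar.

    -- Top-N-per-group, limited to selected categories (all specifics are placeholders)
    SELECT id, user_id, category, score, interest
    FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (PARTITION BY category ORDER BY score DESC) AS rn
        FROM scores t
        WHERE category IN ('A', 'B')   -- only the groups that matter
    ) ranked
    WHERE rn <= 3;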

Return only most recent entry per id

亡梦爱人 submitted on 2019-12-11 10:38:36
Question: I'm running this SQL script on my Firebird 2.5 DB:

    SELECT aktivitaet.creationdatetime,
           (select STRINGPROPVALUE from PROPERTY WHERE PROPERTYNAME LIKE 'GlobalDokPfad') as basispfad,
           aktivitaet.pfad,
           cast(rechnung.datum as date),
           rechnung.nummer,
           projekt.code,
           cast(rechnung.verrtotal as numeric(10,2)),
           projekt.betreffend
    FROM rechnung
    INNER JOIN aktivitaetenlink ON rechnung.bold_id = aktivitaetenlink.eintraege
    INNER JOIN aktivitaet ON aktivitaetenlink.aktivitaeten = aktivitaet.bold_id
    Left JOIN …
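Firebird 2.5 has no window functions, so a common way to keep only the most recent aktivitaet per rechnung is a correlated subquery on the timestamp. This is a sketch only: the grouping key (per rechnung) is an assumption, and the remaining joins and columns from the original query are omitted for brevity.

    SELECT r.nummer, a.creationdatetime, a.pfad
    FROM rechnung r
    INNER JOIN aktivitaetenlink al ON r.bold_id = al.eintraege
    INNER JOIN aktivitaet a ON al.aktivitaeten = a.bold_id
    WHERE a.creationdatetime = (
        -- newest aktivitaet timestamp for this rechnung
        SELECT MAX(a2.creationdatetime)
        FROM aktivitaetenlink al2
        INNER JOIN aktivitaet a2 ON al2.aktivitaeten = a2.bold_id
        WHERE al2.eintraege = r.bold_id
    );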

Optimized querying in PostgreSQL

别来无恙 submitted on 2019-12-11 10:29:21
Question: Assume you have a table named tracker with the following records:

    issue_id | ingest_date         | verb,status
    10       | 2015-01-24 00:00:00 | 1,1
    10       | 2015-01-25 00:00:00 | 2,2
    10       | 2015-01-26 00:00:00 | 2,3
    10       | 2015-01-27 00:00:00 | 3,4
    11       | 2015-01-10 00:00:00 | 1,3
    11       | 2015-01-11 00:00:00 | 2,4

I need the following results:

    10 | 2015-01-26 00:00:00 | 2,3
    11 | 2015-01-11 00:00:00 | 2,4

I am trying out this query:

    select * from etl_change_fact
    where ingest_date = (select max(ingest_date) from etl_change_fact);

However, this gives me only …
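The per-group version of that max(ingest_date) attempt is usually written with PostgreSQL's DISTINCT ON. The verb = 2 filter below is only an inference from the sample output (both expected rows have verb 2) and may not match the full question, so treat the whole thing as a sketch.

    SELECT DISTINCT ON (issue_id)
           issue_id, ingest_date, verb, status
    FROM etl_change_fact
    WHERE verb = 2                        -- assumption based on the sample output
    ORDER BY issue_id, ingest_date DESC;  -- the newest row wins within each issue_id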

Oracle group part of row and get row with latest timestamp

独自空忆成欢 submitted on 2019-12-11 09:36:18
Question: I have a select that gives me this result:

    ID | System  | Type1 | NID | Name_ | Type2__  | Date
    24 | AA-Tool | PRIV  | 816 | Name1 | IMPLICIT | 17.12.2014
    24 | AA-Tool | PRIV  | 816 | Name1 | EXPLICIT | 19.12.2014
    24 | AA-Tool | PRIV  | 816 | Name1 | EXPLICIT | 20.12.2014
    25 | BB-Tool | PRIV  | 817 | Name2 | EXPLICIT | 20.12.2014
    25 | BB-Tool | PRIV  | 817 | Name2 | EXPLICIT | 21.12.2014

So ID, System, Type1, NID and Name should be distinct, and Type2 and Date should be the last entry by …
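One Oracle pattern for "all the grouping columns, plus Type2/Date taken from the newest row" is KEEP (DENSE_RANK LAST). The source of the rows (base_query) and the real column identifiers are assumptions; Date in particular would need quoting or a different name, so date_col stands in for it here.

    SELECT id, system, type1, nid, name_,
           MAX(type2__) KEEP (DENSE_RANK LAST ORDER BY date_col) AS type2__,  -- Type2 of the newest row
           MAX(date_col)                                         AS date_col  -- the newest date itself
    FROM base_query      -- the select that produced the rows above (placeholder)
    GROUP BY id, system, type1, nid, name_;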

MYSQL Select 5 records for the last 5 distinct records

坚强是说给别人听的谎言 submitted on 2019-12-11 09:10:40
Question: I've searched on here for a few different ways to do this, but can't quite get it to work. Basically, I have a table with a record of images added to a website. Each image is put into this table. I want to grab the first 5 images from each distinct Added field. So, the table may look like this:

    ID | File   | Folder | Added
    13 | 13.jpg | Event3 | 20130830
    12 | 12.jpg | Event3 | 20130830
    11 | 11.jpg | Event3 | 20130830
    10 | 10.jpg | Event3 | 20130830
     9 |  9.jpg | Event3 | 20130830
     8 |  8.jpg | Event2 | …
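A sketch assuming MySQL 8+ window functions are available (the original question may well be on an older version): take the 5 newest images per Added value, restricted to the 5 most recent distinct Added values. The table name (images) and the use of id to order within a day are assumptions.

    SELECT id, file, folder, added
    FROM (
        SELECT i.*,
               ROW_NUMBER() OVER (PARTITION BY added ORDER BY id DESC) AS rn,        -- rank images within one Added date
               DENSE_RANK() OVER (ORDER BY added DESC)                 AS day_rank   -- rank the distinct Added dates
        FROM images i
    ) ranked
    WHERE rn <= 5
      AND day_rank <= 5;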

Delete all rows but one with the greatest value per group

和自甴很熟 submitted on 2019-12-11 08:24:03
Question: So, I just recently asked a question, Update using a subquery with aggregates and groupby in Postgres, and it turns out I was going about my issue with flawed logic. In the same scenario as in the question above, instead of updating all the rows to have the max quantity, I'd like to delete the rows that don't have the max quantity (and any duplicate max quantities). Essentially I just need to convert the below to a delete statement that preserves only the largest quantities per item_name. I'm …
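A PostgreSQL sketch that keeps exactly one row (the largest quantity) per item_name and deletes everything else, ties included. The table name (items) is an assumption, and ctid is used as the row identifier only because no primary key appears in the excerpt.

    DELETE FROM items
    WHERE ctid NOT IN (
        SELECT DISTINCT ON (item_name) ctid   -- one surviving row per item_name
        FROM items
        ORDER BY item_name, quantity DESC     -- the survivor is the row with the largest quantity
    );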

How can I get the second max salary using “over(partition by)” in Oracle SQL?

孤街浪徒 submitted on 2019-12-11 07:43:36
Question: I already get it by running this query:

    SELECT *
    FROM ( SELECT emp_id, salary,
                  row_number() over(order by salary desc) AS rk
           FROM test_qaium )
    WHERE rk = 2;

But a friend asked me to find the second MAX salary from the employees table, and it must use "over(partition by)" in Oracle SQL. Could anybody please help me, and clarify the concept of "partition by" in Oracle SQL?

Answer 1: Oracle setup:

    CREATE TABLE test_qaium ( emp_id, salary, department_id ) AS
    SELECT 1, 10000, 1 FROM DUAL UNION ALL
    SELECT 2, 20000, …
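Building on that setup, a sketch of PARTITION BY in action: the ranking is computed separately within each department_id, so rk = 2 returns the second-highest salary per department. DENSE_RANK is used rather than ROW_NUMBER so that salary ties share a rank.

    SELECT emp_id, department_id, salary
    FROM (
        SELECT emp_id, department_id, salary,
               DENSE_RANK() OVER (PARTITION BY department_id
                                  ORDER BY salary DESC) AS rk
        FROM test_qaium
    )
    WHERE rk = 2;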

doctrine dbal get latest chat message per group [duplicate]

▼魔方 西西 submitted on 2019-12-11 07:40:02
Question: This question already has answers here: Doctrine Query Language get Max/Latest Row Per Group (4 answers). Closed 2 years ago. When trying to make a multiple select:

    $result = $this->qb->select('c.id', 'c.message', 'acc.name as chat_from', 'c.chat_to as count')
        ->addSelect("SELECT * FROM chat ORDER BY date_deliver DESC")
        ->from($this->table, 'c')
        ->join('c', 'account', 'acc', 'c.chat_from = acc.id')
        ->orderBy('date_sent', 'DESC')
        ->groupby('chat_from')
        ->where('chat_to ='.$id)
        ->execute();
    return $result …
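Whatever the builder calls end up looking like, the SQL they need to express is a standard latest-row-per-group join: each sender's newest message joined back to the chat table. This is a sketch only; column names follow the builder code above, date_sent is chosen over date_deliver (the excerpt mixes both), and :id stands for the bound recipient.

    SELECT c.id, c.message, acc.name AS chat_from
    FROM chat c
    JOIN account acc ON c.chat_from = acc.id
    JOIN ( SELECT chat_from, MAX(date_sent) AS max_sent   -- newest message per sender
           FROM chat
           WHERE chat_to = :id
           GROUP BY chat_from ) latest
      ON latest.chat_from = c.chat_from
     AND latest.max_sent  = c.date_sent
    WHERE c.chat_to = :id;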

Is there a way to find TOP X records with grouped data?

不打扰是莪最后的温柔 submitted on 2019-12-11 07:28:07
Question: I'm working with a Sybase 12.5 server and I have a table defined as such:

    CREATE TABLE SomeTable(
        [GroupID] [int] NOT NULL,
        [DateStamp] [datetime] NOT NULL,
        [SomeName] varchar(100),
        PRIMARY KEY CLUSTERED (GroupID, DateStamp)
    )

I want to be able to list, per [GroupID], only the latest X records by [DateStamp]. The kicker is X > 1, so plain old MAX() won't cut it. I'm assuming there's a wonderfully nasty way to do this with cursors and what-not, but I'm wondering if there is a simpler way …
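Without window functions (Sybase ASE 12.5 predates them), one portable pattern counts how many newer rows exist in the same group and keeps a row only when that count is below X. X = 3 here is just an example value.

    SELECT t.GroupID, t.DateStamp, t.SomeName
    FROM SomeTable t
    WHERE (SELECT COUNT(*)
           FROM SomeTable n
           WHERE n.GroupID = t.GroupID
             AND n.DateStamp > t.DateStamp) < 3   -- keep the 3 newest rows per GroupID
    ORDER BY t.GroupID, t.DateStamp DESC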