Question
I've asked a few questions lately regarding database design, probably too many ;-) However I believe I'm slowly getting to the heart of the matter with my design and am boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database.
In this system, an alert is an entity that must be acknowledged, acted upon, etc.
Initially I related readings to alerts like this (very cut down):
[Location]
LocationId
[Sensor]
SensorId
LocationId
UpperLimitValue
LowerLimitValue
[SensorReading]
SensorReadingId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
[SensorAlertReading]
SensorAlertId
SensorReadingId
The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not.
The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this though?).
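One way to make the database enforce the "same sensor" rule, if it does matter, is to overlap the foreign keys on SensorId. A rough sketch (this assumes SensorId is added to SensorReading, which the cut-down design above omits, and uses extra UNIQUE constraints as composite FK targets):

    -- Sketch only: the shared SensorId column forces alert and reading to belong
    -- to the same sensor.
    CREATE TABLE SensorAlert (
        SensorAlertId INT NOT NULL,
        SensorId      INT NOT NULL REFERENCES Sensor (SensorId),
        PRIMARY KEY (SensorAlertId),
        UNIQUE (SensorAlertId, SensorId)       -- target for the composite FK below
    );

    CREATE TABLE SensorReading (
        SensorReadingId INT NOT NULL,
        SensorId        INT NOT NULL REFERENCES Sensor (SensorId),
        -- Value, Status, Timestamp omitted for brevity
        PRIMARY KEY (SensorReadingId),
        UNIQUE (SensorReadingId, SensorId)     -- target for the composite FK below
    );

    CREATE TABLE SensorAlertReading (
        SensorAlertId   INT NOT NULL,
        SensorReadingId INT NOT NULL,
        SensorId        INT NOT NULL,
        PRIMARY KEY (SensorAlertId, SensorReadingId),
        FOREIGN KEY (SensorAlertId, SensorId)   REFERENCES SensorAlert   (SensorAlertId, SensorId),
        FOREIGN KEY (SensorReadingId, SensorId) REFERENCES SensorReading (SensorReadingId, SensorId)
    );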
To simplify things, I thought: why even bother with the SensorAlertReading table? Instead I could do this:
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorReading]
SensorReadingId
SensorId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
SensorId
Timestamp
[SensorAlertEnd]
SensorAlertId
Timestamp
Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can.
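For example, looking up the readings for a given alert would just be a time-range join, something like this sketch (assuming an alert has at most one SensorAlertEnd row, and none while it is still open):

    -- Readings that fall within one alert's active window.
    SELECT r.*
      FROM SensorAlert a
      LEFT JOIN SensorAlertEnd e ON e.SensorAlertId = a.SensorAlertId
      JOIN SensorReading r ON r.SensorId = a.SensorId
                          AND r.Timestamp >= a.Timestamp
                          AND r.Timestamp <= COALESCE(e.Timestamp, CURRENT_TIMESTAMP)
     WHERE a.SensorAlertId = @SensorAlertId   -- the alert of interest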
Obviously the downside is I no longer have any constraint stopping me from deleting readings that occurred during the alert, but I'm not sure that constraint is necessary.
Now looking in from the outside as a developer / DBA, would that make you want to be sick or does it seem reasonable?
Is there perhaps another way of doing this that I may be missing?
Thanks.
EDIT: Here's another idea - it works in a different way. It stores each sensor state change (going from normal to alert) in a table, and readings are then simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name - maybe SensorReadingGroup?):
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorState]
SensorStateId
SensorId
Timestamp
Status
IsInAlert
[SensorReading]
SensorReadingId
SensorStateId
Value
Timestamp
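With this design, an alert's readings fall out of the grouping directly, and an alert "period" is just an in-alert state row together with the next state change for the same sensor. A rough sketch of both queries (column names as listed above):

    -- Readings belonging to one alert, i.e. one state row flagged IsInAlert:
    SELECT r.*
      FROM SensorReading r
     WHERE r.SensorStateId = @SensorStateId

    -- Alert periods per sensor: each in-alert state runs until the next state change.
    SELECT s.SensorId,
           s.Timestamp                          AS AlertStart,
           ( SELECT MIN(n.Timestamp)
               FROM SensorState n
              WHERE n.SensorId  = s.SensorId
                AND n.Timestamp > s.Timestamp ) AS AlertEnd   -- NULL if still in alert
      FROM SensorState s
     WHERE s.IsInAlert = 1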
There must be an elegant solution to this!
Answer 1:
Revised 01 Jan 11 21:50 UTC
Data Model
I think your Data Model should look like this: Sensor Data Model (link). (Page 2 relates to your other question re History.)
Readers who are unfamiliar with the Relational Modelling Standard may find the IDEF1X Notation document (link) useful.
Business Rules (developed in the commentary)
I did identify some early Business Rules, which are now obsolete, so I have deleted them.
These can be "read" in the Relations shown adjacent to the Data Model. The Business Rules and all implied Referential and Data Integrity can be implemented in, and thus guaranteed by, RULES and CHECK Constraints in any ISO SQL database. This is a demonstration of IDEF1X, in the development of both the Relational keys and the Entities and Relations. Note the Verb Phrases are more than mere flourish.
Apart from three Reference tables, the only static, Identifying entities are Location, NetworkSlave, and User. Sensor is central to the system, so I have given it its own heading.
Location
- A Location contains one-to-many Sensors
- A Location may have one Logger
NetworkSlave
- A NetworkSlave collects Readings for one-to-many NetworkSensors
User
- A User may maintain zero-to-many Locations
- A User may maintain zero-to-many Sensors
- A User may maintain zero-to-many NetworkSlaves
- A User may perform zero-to-many Downloads
- A User may make zero-to-many Acknowledgements, each on one Alert
- A User may take zero-to-many Actions, each of one ActionType
Sensor
- A SensorType is installed as zero-to-many Sensors
- A Logger (houses and) collects Readings for one LoggerSensor
- A Sensor is either one NetworkSensor or one LoggerSensor
  - A NetworkSensor records Readings collected by one NetworkSlave
  - A Logger is periodically Downloaded one-to-many times
  - A LoggerSensor records Readings collected by one Logger
- A Reading may be deemed in Alert, of one AlertType
  - An AlertType may happen on zero-to-many Readings
- An Alert may be one Acknowledgement, by one User
  - An Acknowledgement may be closed by one Action, of one ActionType, by one User
  - An ActionType may be taken on zero-to-many Actions
Responses to Comments
Sticking Id columns on everything that moves interferes with the determination of Identifiers, the natural Relational keys that give your database relational "power". They are Surrogate Keys, which means an additional Key and Index, and they hinder that relational power, which results in more joins than otherwise necessary. Therefore I use them only when the Relational key becomes too cumbersome to migrate to the child tables (and accept the imposed extra join).

Nullable keys are a classic symptom of an Unnormalised database. Nulls in the database are bad news for performance; but Nulls in FKs mean each table is doing too many things, has too many meanings, and the result is very poor code. Good for people who like to "refactor" their databases; completely unnecessary for a Relational database.
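To illustrate the migrated-key point with a sketch (column names are taken from the model above; the exact data types are assumptions): with the composite key carried down into Reading, a question such as "all readings for a Location" needs no join back through Sensor at all.

    -- Natural (migrated) keys: Reading carries (LocationId, SensorNo) from Sensor,
    -- which in turn carries LocationId from Location.
    CREATE TABLE Reading (
        LocationId  INT           NOT NULL,
        SensorNo    SMALLINT      NOT NULL,
        ReadingDtm  DATETIME      NOT NULL,
        Value       DECIMAL(10,3) NOT NULL,
        PRIMARY KEY (LocationId, SensorNo, ReadingDtm),
        FOREIGN KEY (LocationId, SensorNo) REFERENCES Sensor (LocationId, SensorNo)
    );

    -- All readings for one Location: no join needed, the key is already there.
    SELECT *
      FROM Reading
     WHERE LocationId = @LocationId

    -- With an Id-only (surrogate) design, the same question requires a join:
    --   SELECT r.* FROM Reading r JOIN Sensor s ON s.SensorId = r.SensorId
    --    WHERE s.LocationId = @LocationId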
- Resolved: An Alert may be Acknowledged; an Acknowledgement may be Actioned.
- The columns above the line are the Primary Key (refer to the Notation document).
- SensorNo is a sequential number within LocationId; refer to the Business Rules, it is meaningless outside a Location; the two columns together form the PK. When you are ready to INSERT a Sensor (after you have checked that the attempt is valid, etc), it is derived as follows. This excludes LoggerSensors, which are zero:

    INSERT Sensor (LocationId, SensorNo, SensorCode)
        SELECT @LocationId,
               ISNULL(MAX(SensorNo), 0) + 1,
               @SensorCode
          FROM Sensor
         WHERE LocationId = @LocationId

- For accuracy or improved meaning, I have changed "NetworkSlave monitors NetworkSensor" to "NetworkSlave collects Readings from NetworkSensor".
- Check Constraints. NetworkSensor and LoggerSensor are exclusive subtypes of Sensor, and their integrity can be set by CHECK constraints. Alerts, Acknowledgements and Actions are not subtypes, but their integrity is set by the same method, so I will list them together.
  - Every Relation in the Data Model is implemented as a CONSTRAINT in the child (or subtype) as FOREIGN KEY (child_FK_columns) REFERENCES Parent (PK_columns).
  - A Discriminator is required to identify which subtype a Sensor is. This is SensorNo = 0 for LoggerSensors, and non-zero for NetworkSensors.
  - The existence of NetworkSensors and LoggerSensors is constrained by the FK CONSTRAINTS to NetworkSlave and Logger, respectively, as well as to Sensor.
  - In NetworkSensor, include a CHECK constraint to ensure SensorNo is non-zero.
  - In LoggerSensor, include a CHECK constraint to ensure SensorNo is zero.
  - The existence of Acknowledgements and Actions is constrained by the identified FK CONSTRAINTS (an Acknowledgement cannot exist without an Alert; an Action cannot exist without an Acknowledgement). Conversely, an Alert with no Acknowledgement is in an unacknowledged state; an Alert with an Acknowledgement but no Action is in an acknowledged but un-actioned state.
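As a rough illustration of those subtype constraints (a sketch only: the key columns follow the Data Model, but the exact types and the NetworkSlave/Logger keys are assumptions, and DDL syntax varies by DBMS):

    -- Exclusive subtypes of Sensor, discriminated by SensorNo.
    CREATE TABLE NetworkSensor (
        LocationId     INT      NOT NULL,
        SensorNo       SMALLINT NOT NULL CHECK (SensorNo <> 0),  -- non-zero for NetworkSensors
        NetworkSlaveId INT      NOT NULL,
        PRIMARY KEY (LocationId, SensorNo),
        FOREIGN KEY (LocationId, SensorNo) REFERENCES Sensor (LocationId, SensorNo),
        FOREIGN KEY (NetworkSlaveId)       REFERENCES NetworkSlave (NetworkSlaveId)
    );

    CREATE TABLE LoggerSensor (
        LocationId INT      NOT NULL,
        SensorNo   SMALLINT NOT NULL CHECK (SensorNo = 0),       -- always zero for LoggerSensors
        PRIMARY KEY (LocationId, SensorNo),
        FOREIGN KEY (LocationId, SensorNo) REFERENCES Sensor (LocationId, SensorNo),
        FOREIGN KEY (LocationId)           REFERENCES Logger (LocationId)
    );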
Alerts. The concept in a design for this kind of (live monitoring and alert) application is many small programs, running independently, all using the database as the single version of the truth. Some programs insert rows (Readings, Alerts); other programs poll the db for the existence of such rows (and send SMS messages, etc; or hand-held units pick up the Alerts relevant to that unit only). In that sense, the db may be described as a message box (one program puts rows in, which another program reads and actions).

The assumption is, Readings for Sensors are being recorded "live" by the NetworkSlave, and every minute or so a new set of Readings is inserted. A background process executes periodically (every minute or whatever); this is the main "monitor" program, and it will have many functions within its loop. One such function will be to monitor Readings and produce the Alerts that have occurred since the last iteration (of the program loop). The following code segment will be executed within the loop, one for each AlertType. It is a classic Projection:

    -- Assume @LoopDtm contains the DateTime of the last iteration
    INSERT Alert
        SELECT r.LocationId,
               r.SensorNo,
               r.ReadingDtm,
               'L'                      -- AlertType "Low"
          FROM Sensor  s,
               Reading r
         WHERE s.LocationId = r.LocationId
           AND s.SensorNo   = r.SensorNo
           AND r.ReadingDtm > @LoopDtm
           AND r.Value      < s.LowerLimit

    INSERT Alert
        SELECT r.LocationId,
               r.SensorNo,
               r.ReadingDtm,
               'H'                      -- AlertType "High"
          FROM Sensor  s,
               Reading r
         WHERE s.LocationId = r.LocationId
           AND s.SensorNo   = r.SensorNo
           AND r.ReadingDtm > @LoopDtm
           AND r.Value      > s.UpperLimit

So an Alert is definitely a fact that exists as a row in the database. Subsequently it may be Acknowledged by a User (another row/fact), and Actioned with an ActionType by a User. Other than this (the creation-by-Projection act), ie. the general and unvarying case, I would refer to an Alert only as a row in Alert; a static object after creation.
Concerns re Changing Users. That is taken care of already, as follows. At the top of my (revised yesterday) Answer, I state that the major Identifying elements are static. I have re-sequenced the Business Rules to improve clarity.

- For the reasons you mention, User.Name is not a good PK for User, although it remains an Alternate Key (Unique) and the one that is used for human interaction. User.Name cannot be duplicated; there cannot be more than one Fred. There can be, in terms of FirstName-LastName, two Fred Bloggs, but not in terms of User.Name; our second Fred needs to choose another User.Name. Note the identified Indices. UserId is the permanent record, and it is already the PK. Never delete Users; they have historical significance. In fact the FK constraints will stop you (never use CASCADE in a real database, that is pure insanity). No need for code or triggers, etc.
- Alternately (to delete Users who never did anything, and thus release User.Name for use), allow Delete as long as there are no FK violations (ie. UserId is not referenced in Download, Acknowledgement, Action).
- To ensure that only Users who are Current perform Actions, add an IsObsolete boolean in User (DM updated), and check that column whenever that table is interrogated for any function (except reports). You can implement a View UserCurrent which returns only those Users.
- Same goes for Location and NetworkSlave. If you need to differentiate current vs historical, let me know and I will add IsObsolete to them as well.
- I don't know: you may purge the database of ancient Historical data periodically, deleting rows that are (eg) over 10 years old. That has to be done from the bottom (tables) first, working up the Relations.
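A minimal sketch of that View (assuming IsObsolete is a boolean/bit column as described):

    -- Current (non-obsolete) Users only; use this View everywhere except reports.
    CREATE VIEW UserCurrent AS
        SELECT *
          FROM [User]          -- "User" usually needs quoting; it is a reserved word in many DBMSs
         WHERE IsObsolete = 0;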
Feel free to ask Questions.
Note: the IDEF1X Notation document has been expanded.
Answer 2:
Here are my two cents on the problem.
AlertType table holds all possible types of alerts. AlertName may be something like high temperature, low pressure, low water level, etc.
AlertSetup table allows for setup of alert thresholds from a sensor for a specific alert type.
For example, TresholdLevel = 100 and TresholdType = 'HI' should trigger alert for readings over 100.
Reading table holds sensor readings as they are streamed into the server (application).
Alert table holds all alerts. It keeps links to the first reading that triggered the alert and the last one that finished it (FirstReadingId, LastReadingId). IsActive is true if there is an active alert for the (SensorId, AlertTypeId) combination. IsActive can be set to false only by a reading going back below the alert threshold. IsAcknowledged means that an operator has acknowledged the alert.
The application layer inserts the new reading into the Reading table and captures the ReadingId. The application then checks the reading against the alert setups for each (SensorId, AlertTypeId) combination. At this point a collection of objects {SensorId, AlertTypeId, ReadingId, IsAlert} is created and the IsAlert flag is set on each object. The Alert table is then checked for active alerts for each object {SensorId, AlertTypeId, ReadingId, IsAlert} in the collection:

- If IsAlert is TRUE and there is no active alert for the (SensorId, AlertTypeId) combination, a new row is added to the Alert table with FirstReadingId pointing to the current ReadingId; IsActive is set to TRUE and IsAcknowledged to FALSE.
- If IsAlert is TRUE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting LastReadingId to the current ReadingId.
- If IsAlert is FALSE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting IsActive to FALSE.
- If IsAlert is FALSE and there is no active alert for the (SensorId, AlertTypeId) combination, the Alert table is not modified.
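A rough sketch of that logic in SQL (table and column names are assumed from the description above; in practice this would sit in the application layer or a stored procedure, with proper transaction handling):

    -- Apply one {SensorId, AlertTypeId, ReadingId, IsAlert} object to the Alert table.
    -- @SensorId, @AlertTypeId, @ReadingId, @IsAlert are assumed inputs.
    IF @IsAlert = 1
    BEGIN
        IF EXISTS ( SELECT 1 FROM Alert
                     WHERE SensorId    = @SensorId
                       AND AlertTypeId = @AlertTypeId
                       AND IsActive    = 1 )
            -- Active alert already open: extend it to cover the current reading.
            UPDATE Alert
               SET LastReadingId = @ReadingId
             WHERE SensorId    = @SensorId
               AND AlertTypeId = @AlertTypeId
               AND IsActive    = 1
        ELSE
            -- No active alert: open a new one.
            INSERT Alert (SensorId, AlertTypeId, FirstReadingId, LastReadingId, IsActive, IsAcknowledged)
            VALUES (@SensorId, @AlertTypeId, @ReadingId, @ReadingId, 1, 0)
    END
    ELSE
        -- Reading is back in range: close any active alert; otherwise nothing to do.
        UPDATE Alert
           SET IsActive = 0
         WHERE SensorId    = @SensorId
           AND AlertTypeId = @AlertTypeId
           AND IsActive    = 1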
Answer 3:
The main "triangle" you have to deal with here is Sensor, [Sensor]Reading, and Alert. Presuming you have to track activity as it is occuring (as opposed to a "load it all at once" design), your third solution is similar to something we did recently. A few tweaks and it would look like:
[Location]
LocationId
[Sensor]
SensorId
LocationId
CurrentSensorState -- Denormalized data!
[SensorReading]
SensorReadingId
SensorState
Value
Timestamp
[SensorStateLog]
SensorId
Timestamp
SensorState
Status -- Does what?
IsInAlert
(Primary key is {SensorId, Timestamp})
"SensorState" could be SensorStateId, with an associated lookup table listing (and constraining) all possible states.
The idea is, your Sensor table contains one row per sensor and shows its current state. SensorReading is updated continuously with sensor readings. If and when a given sensor's current state changes (i.e. a new Reading's state differs from the Sensor's current state), you change the current state and add a row to SensorStateLog showing the change in state. (Optionally, you could update the "prior" entry for that sensor with a "state ended" timestamp, but that's fussy code to write.)
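A minimal sketch of that state-change step (names taken from the tables above; @SensorId, @NewState and @Now are assumed inputs, the 'ALERT' state value is just an example, and concurrency is ignored):

    -- Only act when the incoming reading's state differs from the sensor's current state.
    IF EXISTS ( SELECT 1 FROM Sensor
                 WHERE SensorId = @SensorId
                   AND CurrentSensorState <> @NewState )
    BEGIN
        -- Flip the denormalized current state on the Sensor row.
        UPDATE Sensor
           SET CurrentSensorState = @NewState
         WHERE SensorId = @SensorId

        -- Record the transition in the log (PK is {SensorId, Timestamp}).
        INSERT SensorStateLog (SensorId, Timestamp, SensorState, IsInAlert)
        VALUES (@SensorId, @Now, @NewState,
                CASE WHEN @NewState = 'ALERT' THEN 1 ELSE 0 END)
    END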
CurrentSensorState in the Sensor table is denormalized data, but if properly maintained (and if you have millions of rows) it will make querying current state vastly more efficient and so may be worth the effort.
The obvious downside of all this is that Alerts are no longer an entity, and they become that much harder to track and identify. If these must be readily and immediately identifiable and usable, your third scheme won't do what you need it to do.
Source: https://stackoverflow.com/questions/4335189/opinions-on-sensor-reading-alert-database-design