When is multiple validation layers of protection necessary?


Apr 4, 2025 - 22:33
I'm having a hard time understanding at what point multiple layers of validation are necessary, rather than a single point of validation (and thus a single point of failure), and whether the performance hit of those layers is a concern.

Let's say you have an events table in SQL that contains date ranges for events, and an entries table whose entries must fall within the date range of a particular event.

events table

CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_id UUID REFERENCES users(id) NOT NULL,
    start_date DATE NOT NULL,
    end_date DATE NOT NULL
);

entries table

CREATE TABLE entries (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    event_id UUID REFERENCES events(id) NOT NULL,
    user_id UUID REFERENCES users(id) NOT NULL,
    date DATE NOT NULL
);

You have a situation where a user could attempt to insert a row into the entries table with a date that is outside the start and end dates of the referenced event. Let's say that, for practical purposes, this would not affect the client-side application, but obviously you don't know for sure whether it could cause bugs or a security issue later on.

While you can validate the request body with a backend validation library, that library can't know whether the date falls between start_date and end_date in the events table, because those values aren't sent with the request. Even if the client did send those parameters, you couldn't trust that they are actually the event's start and end dates.

To protect against this, after the backend validation library runs you could query the events table to check that the date is within the event's range, at the cost of every entry creation or update performing an additional query. Alternatively, I suppose you could place a check at the database level with a trigger or some other database mechanism.
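Concretely, the extra application-level query could be something like this (a sketch in PostgreSQL; :event_id and :entry_date stand for the bind parameters the driver would supply):

```sql
-- Returns true only if the entry's date falls inside the parent event's range.
SELECT EXISTS (
    SELECT 1
    FROM events
    WHERE id = :event_id
      AND :entry_date BETWEEN start_date AND end_date
) AS date_in_range;
```

The backend would run this before the INSERT/UPDATE and reject the request when it returns false.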

How do you know whether you should implement one of these protections, and if so, which one (or both?), or something else entirely? Alternatively, you could do no additional checks and just let the data through, though something tells me that's not a good idea.

Is the performance hit of having to perform these checks a concern, especially when relying on redundant protections? It makes me question whether this validation process is correct in the first place.
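For reference, the database-level check I have in mind would look something like this in PostgreSQL (a sketch; a CHECK constraint can't reference another table, so it has to be a trigger, and the function and trigger names here are just illustrative):

```sql
-- Reject any entry whose date is outside its parent event's range.
CREATE OR REPLACE FUNCTION check_entry_date() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM events e
        WHERE e.id = NEW.event_id
          AND NEW.date BETWEEN e.start_date AND e.end_date
    ) THEN
        RAISE EXCEPTION 'entry date % is outside the range of event %',
            NEW.date, NEW.event_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER entries_date_in_range
    BEFORE INSERT OR UPDATE ON entries
    FOR EACH ROW EXECUTE FUNCTION check_entry_date();
```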