How to handle overfetching efficiently with the repository pattern in large applications?
I'm working on a large TypeScript project in Node.js with Prisma, where we have dozens of domain entities, some with 30+ fields and complex relations.
We're using the repository pattern to abstract data access. The challenge we're facing is how to avoid overfetching data when different use cases require different slices of the same entity.
Problem
For example, consider a "Shipment" entity with 30+ fields and some relations:
- In one use case, I only need 5 fields and a few related fields.
- In another use case, I need the full entity, including its relations (both shapes are sketched below).
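To make the contrast concrete, here's a minimal sketch of the two shapes as separate repository methods. The field names (trackingNumber, status) and the carrier/items relations are hypothetical placeholders, not our actual schema:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Use case 1: a summary view that needs only a handful of fields.
export function findShipmentSummary(id: string) {
  return prisma.shipment.findUnique({
    where: { id },
    select: {
      id: true,
      status: true,
      trackingNumber: true,
      createdAt: true,
      carrier: { select: { name: true } }, // a few related fields only
    },
  });
}

// Use case 2: a detail view that needs the full entity plus relations.
export function findShipmentFull(id: string) {
  return prisma.shipment.findUnique({
    where: { id },
    include: { carrier: true, items: true }, // all scalars + full relations
  });
}
```

Every new combination of fields and relations means another method like these two.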
To handle this, we’ve had to create dozens of specific repository methods for different permutations of these fetch requirements.
This feels unsustainable as the app grows, because we could end up creating hundreds or even thousands of these methods.
What we've considered
- Creating new repository methods per use case (leads to method explosion).
- Creating a new abstraction for granular field selection/omission and relation handling. I gave this a go and it got very complex very fast: nested selections and TypeScript generics complicate things, and I'd practically be recreating the ORM's existing selection abstractions (see the sketch after this list).
- Always fetching full entities (leads to overfetching).
- Dropping the repository abstraction and calling Prisma directly. This makes refactoring hard, because every small schema change could affect 1,000+ direct usages.
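For reference, the abstraction I attempted looked roughly like the sketch below. It leans on Prisma's generated types (Prisma.ShipmentSelect, Prisma.ShipmentGetPayload) to narrow the return type to the caller's selection. This is a simplified reconstruction, not our production code, and depending on the Prisma version the return type may need a cast:

```typescript
import { Prisma, PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// One generic finder instead of one repository method per permutation.
// The caller passes a Prisma `select` object and the return type is
// narrowed to exactly the selected fields via Prisma.ShipmentGetPayload.
export function findShipment<S extends Prisma.ShipmentSelect>(
  id: string,
  select: S,
): Promise<Prisma.ShipmentGetPayload<{ select: S }> | null> {
  // Depending on the Prisma version, an explicit cast may be needed here,
  // since findUnique's inferred result is expressed in Prisma's own
  // wrapper types rather than in GetPayload directly.
  return prisma.shipment.findUnique({
    where: { id },
    select,
  }) as unknown as Promise<Prisma.ShipmentGetPayload<{ select: S }> | null>;
}

// Usage: `summary` is typed with only `id` and `status`.
// const summary = await findShipment('ship_1', { id: true, status: true });
```

Even when the typing works, the repository's public interface now accepts Prisma select objects, so callers are coupled to the ORM anyway; that's why it felt like recreating Prisma's own abstraction.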
Questions:
- How do large codebases (particularly those written in TypeScript) manage this kind of granularity in data fetching?
- Is overfetching just accepted?
- Is it reasonable to abandon the repository pattern in such scenarios?
Any insights from teams that have scaled this would be really helpful.