Back to our allocations project!
In this chapter, we’ll discuss the difference between orchestration logic, business logic, and interfacing code, and we’ll introduce the Service Layer pattern to take care of orchestrating our workflows and defining the use cases of our system.
We’ll also discuss testing: by combining the Service Layer with our Repository abstraction over the database, we’re able to write fast tests, not just of our domain model, but of the entire workflow for a use case.
By the end of this chapter, we’ll have added a Flask API that talks to the Service Layer, which will serve as the entrypoint to our Domain Model. By making the service layer depend on the AbstractRepository, we’ll be able to unit test it using FakeRepository, and then run it in real life using SqlAlchemyRepository. Our target architecture for the end of this chapter is shown in the class diagram below.
[plantuml, chapter_03_class_diagram]
@startuml
package api {
    class Flask {
        allocate_endpoint()
    }
}

package sqlalchemy {
    class Session {
        query()
        add()
    }
}

package allocation {
    class services {
        allocate(line, repository, session)
    }

    abstract class AbstractRepository {
        add()
        get()
        list()
    }

    class Batch {
        allocate()
    }

    class FakeRepository {
        batches: List<Batch>
    }

    class BatchRepository {
        session: Session
    }
}

services -> AbstractRepository : uses
AbstractRepository -> Batch : stores
AbstractRepository <|-- FakeRepository : implements
AbstractRepository <|-- BatchRepository : implements
Flask --> services : invokes
BatchRepository ---> Session : abstracts
@enduml
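For reference, the AbstractRepository in the diagram is the small abstract base class we built in the previous chapter. Here’s a rough sketch, with the method names taken from the diagram above; your version may differ in detail:

import abc


class AbstractRepository(abc.ABC):

    @abc.abstractmethod
    def add(self, batch):
        raise NotImplementedError

    @abc.abstractmethod
    def get(self, reference):
        raise NotImplementedError

    @abc.abstractmethod
    def list(self):
        raise NotImplementedError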
Like any good agile team, we’re hustling to try and get an MVP out and in front of the users to start gathering feedback. We have the core of our domain model and the domain service we need to allocate orders, and we have the Repository interface for permanent storage.
Let’s try and plug all the moving parts together as quickly as we can, and then refactor towards a cleaner architecture. Here’s our plan:
- Use Flask to put an API endpoint in front of our allocate domain service. Wire up the database session and our repository. Test it with an end-to-end test and some quick and dirty SQL to prepare test data.
- Refactor out a Service Layer to serve as an abstraction that captures the use case and sits between Flask and our Domain Model. Build some service-layer tests and show how they can use the FakeRepository.
- Experiment with different types of parameters for our service-layer functions; show that using primitive data types allows the service layer’s clients (our tests and our Flask API) to be decoupled from the model layer.
- Add an extra service called add_stock so that our service-layer tests and end-to-end tests no longer need to go directly to the storage layer to set up test data.
No one is interested in getting into a long terminology debate about what counts as an E2E test vs a functional test vs an acceptance test vs an integration test vs a unit test. Different projects need different combinations of tests, and we’ve seen perfectly successful projects just split things into "fast tests" and "slow tests."
For now we want to write one or maybe two tests that are going to exercise a "real" API endpoint (using HTTP) and talk to a real database. Let’s call them end-to-end tests because it’s one of the most self-explanatory names.
Our first API test (test_api.py) shows a first cut:
@pytest.mark.usefixtures('restart_api')
def test_api_returns_allocation(add_stock):
    sku, othersku = random_sku(), random_sku('other')  #(1)
    batch1, batch2, batch3 = random_batchref(1), random_batchref(2), random_batchref(3)
    add_stock([  #(2)
        (batch1, sku, 100, '2011-01-02'),
        (batch2, sku, 100, '2011-01-01'),
        (batch3, othersku, 100, None),
    ])
    data = {'orderid': random_orderid(), 'sku': sku, 'qty': 3}
    url = config.get_api_url()  #(3)
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 201
    assert r.json()['batchref'] == batch2
- random_sku(), random_batchref() etc. are little helper functions that generate some randomized characters using the uuid module. Because we’re running against an actual database now, this is one way to prevent different tests and runs from interfering with each other.
- add_stock is a helper fixture that just hides away the details of manually inserting rows into the database using SQL. We’ll find a nicer way of doing this later in the chapter.
- config.py is a module for getting configuration information. Again, this is an unimportant detail, and everyone has different ways of solving these problems, but if you’re curious, you can find out more in [appendix_project_structure].
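For the curious, those random helpers don’t need to be anything clever; something along these lines would do (a sketch, not necessarily the project’s exact conftest.py code):

import uuid


def random_suffix():
    return uuid.uuid4().hex[:6]


def random_sku(name=''):
    return f'sku-{name}-{random_suffix()}'


def random_batchref(name=''):
    return f'batch-{name}-{random_suffix()}'


def random_orderid(name=''):
    return f'order-{name}-{random_suffix()}'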
Everyone solves these problems in different ways, but you’re going to need some way of spinning up Flask, possibly in a container, and also talking to a postgres database. If you want to see how we did it, check out [appendix_project_structure].
Implementing things in the most obvious way, you might get something like this:
from flask import Flask, jsonify, request
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import config
import model
import orm
import repository
orm.start_mappers()
get_session = sessionmaker(bind=create_engine(config.get_postgres_uri()))
app = Flask(__name__)
@app.route("/allocate", methods=['POST'])
def allocate_endpoint():
session = get_session()
batches = repository.SqlAlchemyRepository(session).list()
line = model.OrderLine(
request.json['orderid'],
request.json['sku'],
request.json['qty'],
)
batchref = model.allocate(line, batches)
return jsonify({'batchref': batchref}), 201
So far so good. No need for too much more of your "architecture astronaut" nonsense, Bob and Harry, you may be thinking.
But hang on a minute — there’s no commit. We’re not actually saving our allocation to the database. Now we need a second test, either one that will inspect the database state after (not very black-boxey), or maybe one that checks we can’t allocate a second line if a first should have already depleted the batch:
@pytest.mark.usefixtures('restart_api')
def test_allocations_are_persisted(add_stock):
    sku = random_sku()
    batch1, batch2 = random_batchref(1), random_batchref(2)
    order1, order2 = random_orderid(1), random_orderid(2)
    add_stock([
        (batch1, sku, 10, '2011-01-01'),
        (batch2, sku, 10, '2011-01-02'),
    ])
    line1 = {'orderid': order1, 'sku': sku, 'qty': 10}
    line2 = {'orderid': order2, 'sku': sku, 'qty': 10}
    url = config.get_api_url()

    # first order uses up all stock in batch 1
    r = requests.post(f'{url}/allocate', json=line1)
    assert r.status_code == 201
    assert r.json()['batchref'] == batch1

    # second order should go to batch 2
    r = requests.post(f'{url}/allocate', json=line2)
    assert r.status_code == 201
    assert r.json()['batchref'] == batch2
Not quite so lovely, but that will force us to get a commit in.
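The fix itself is small; sketching it out, the endpoint just needs a session.commit() before it returns (error handling still to come):

@app.route("/allocate", methods=['POST'])
def allocate_endpoint():
    session = get_session()
    batches = repository.SqlAlchemyRepository(session).list()
    line = model.OrderLine(
        request.json['orderid'],
        request.json['sku'],
        request.json['qty'],
    )
    batchref = model.allocate(line, batches)
    session.commit()  # persist the allocation; without this, the second test fails
    return jsonify({'batchref': batchref}), 201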
If we keep going like this though, things are going to get uglier and uglier.
Supposing we want to add a bit of error-handling. What if the domain raises an error, for a sku that’s out of stock? Or what about a sku that doesn’t even exist? That’s not something the domain even knows about, nor should it. It’s more of a sanity-check that we should implement at the database layer, before we even invoke the domain service.
Now we’re looking at two more end-to-end tests:
@pytest.mark.usefixtures('restart_api')
def test_400_message_for_out_of_stock(add_stock):  #(1)
    sku, small_batch, large_order = random_sku(), random_batchref(), random_orderid()
    add_stock([
        (small_batch, sku, 10, '2011-01-01'),
    ])
    data = {'orderid': large_order, 'sku': sku, 'qty': 20}
    url = config.get_api_url()
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 400
    assert r.json()['message'] == f'Out of stock for sku {sku}'


@pytest.mark.usefixtures('restart_api')
def test_400_message_for_invalid_sku():  #(2)
    unknown_sku, orderid = random_sku(), random_orderid()
    data = {'orderid': orderid, 'sku': unknown_sku, 'qty': 20}
    url = config.get_api_url()
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 400
    assert r.json()['message'] == f'Invalid sku {unknown_sku}'
- In the first test, we’re trying to allocate more units than we have in stock.
- In the second, the sku just doesn’t exist (because we never called add_stock), so it’s invalid as far as our app is concerned.
And, sure, we could implement it in the Flask app too:
def is_valid_sku(sku, batches):
    return sku in {b.sku for b in batches}


@app.route("/allocate", methods=['POST'])
def allocate_endpoint():
    session = get_session()
    batches = repository.SqlAlchemyRepository(session).list()
    line = model.OrderLine(
        request.json['orderid'],
        request.json['sku'],
        request.json['qty'],
    )
    if not is_valid_sku(line.sku, batches):
        return jsonify({'message': f'Invalid sku {line.sku}'}), 400
    try:
        batchref = model.allocate(line, batches)
    except model.OutOfStock as e:
        return jsonify({'message': str(e)}), 400
    session.commit()
    return jsonify({'batchref': batchref}), 201
But our Flask app is starting to look a bit unwieldy. And our number of E2E tests is starting to get out of control, and soon we’ll end up with an inverted test pyramid (or "ice cream cone model" as Bob likes to call it).
If we look at what our Flask app is doing, there’s quite a lot of what we might call "orchestration" — fetching stuff out of our repository, validating our input against database state, handling errors, and committing in the happy path. Most of these things aren’t anything to do with having a web API endpoint (you’d need them if you were building a CLI for example, see [appendix_csvs]), and they’re not really things that need to be tested by end-to-end tests.
It often makes sense to split out a Service Layer, sometimes called an orchestration layer or use-case layer.
Do you remember the FakeRepository that we prepared in the last chapter?
class FakeRepository(repository.AbstractRepository):

    def __init__(self, batches):
        self._batches = set(batches)

    def add(self, batch):
        self._batches.add(batch)

    def get(self, reference):
        return next(b for b in self._batches if b.reference == reference)

    def list(self):
        return list(self._batches)
Here’s where it will come in useful; it lets us test our service layer with nice, fast unit tests:
def test_returns_allocation():
    line = model.OrderLine("o1", "COMPLICATED-LAMP", 10)
    batch = model.Batch("b1", "COMPLICATED-LAMP", 100, eta=None)
    repo = FakeRepository([batch])  #(1)

    result = services.allocate(line, repo, FakeSession())  #(2)(3)
    assert result == "b1"


def test_error_for_invalid_sku():
    line = model.OrderLine("o1", "NONEXISTENTSKU", 10)
    batch = model.Batch("b1", "AREALSKU", 100, eta=None)
    repo = FakeRepository([batch])  #(1)

    with pytest.raises(services.InvalidSku, match="Invalid sku NONEXISTENTSKU"):
        services.allocate(line, repo, FakeSession())  #(2)(3)
- FakeRepository (shown above) holds the Batch objects that will be used by our test.
- Our services module (services.py) will define an allocate() function. It will sit between our allocate_endpoint() in the API layer and the allocate() domain service from our domain model.
- We also need a FakeSession to fake out the database session, see below:
class FakeSession():
    committed = False

    def commit(self):
        self.committed = True
(The fake session is only a temporary solution. We’ll get rid of it and make things even nicer in the next chapter, [chapter_05_uow].)
Couldn’t we have used a mock (from unittest.mock) instead of building our own FakeSession, or instead of FakeRepository? What’s the difference between a fake and a mock, anyway?
We tend to find that building our own fakes is an excellent way of exercising design pressure against our abstractions. If our abstractions are nice and simple, then they should be easy to fake.
In fact, in the case of FakeRepository, because our fake has actual behavior, using a magic mock from unittest.mock wouldn’t really help.
In the case of FakeSession, the session object isn’t one of our own abstractions, so the argument doesn’t apply; in fact, a unittest.mock mock would have been just fine, but out of habit we avoided using one; in any case, we’ll be getting rid of it in the next chapter.
In general we try and avoid using mocks, and the associated mock.patch. Whenever we find ourselves reaching for them, we often see it as an indication that something is missing from our design. You’ll see a good example of that in [chapter_07_events_and_message_bus] when we mock out an email-sending module, but eventually we replace it with an explicit bit of dependency injection. That’s discussed in [chapter_12_dependency_injection].
Regarding the definition of fakes vs mocks, the short but simplistic answer is:
- Mocks are used to verify how something gets used; they have methods like assert_called_once_with(). They’re associated with London-school TDD.
- Fakes are working implementations of the thing they’re replacing, but they’re designed only for use in tests; they wouldn’t work "in real life", like our in-memory repository. You can use them to make assertions about the end state of a system rather than the behaviors along the way, so they’re associated with classic-style TDD. There’s a small illustrative sketch just below.
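To make the distinction concrete, here’s an illustrative sketch, not taken from the project’s test suite, assuming the FakeRepository and FakeSession shown above plus our model and services modules:

from unittest import mock


def test_commit_mock_style():
    # Mock style: verify how the session collaborator was used.
    repo = FakeRepository([model.Batch('b1', 'LAMP', 100, eta=None)])
    session = mock.Mock()
    services.allocate(model.OrderLine('o1', 'LAMP', 10), repo, session)
    session.commit.assert_called_once_with()


def test_commit_fake_style():
    # Fake style: assert on the end state of the system.
    repo = FakeRepository([model.Batch('b1', 'LAMP', 100, eta=None)])
    session = FakeSession()
    services.allocate(model.OrderLine('o1', 'LAMP', 10), repo, session)
    assert session.committed is True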
(We’re slightly conflating mocks with spies and fakes with stubs here, and you can read the long, correct answer in Martin Fowler’s classic essay on the subject, Mocks Aren’t Stubs.)
(It also probably doesn’t help that the MagicMock objects provided by unittest.mock aren’t, strictly speaking, mocks; they’re spies, if anything. But they’re also often used as stubs or dummies. There, promise we’re done with the test double terminology nitpicks now.)
What about London-school vs classic-style TDD? You can read more about those two in Martin Fowler’s article just cited, as well as on Stack Overflow, but in this book we’re pretty firmly in the classicist camp. We like to build our tests around state, both in setup and in assertions, and we like to work at the highest level of abstraction possible rather than doing checks on the behavior of intermediary collaborators.[1]
Read more on this shortly, in the "high gear vs low gear" section.
The fake .commit() lets us migrate a third test from the E2E layer:
def test_commits():
    line = model.OrderLine('o1', 'OMINOUS-MIRROR', 10)
    batch = model.Batch('b1', 'OMINOUS-MIRROR', 100, eta=None)
    repo = FakeRepository([batch])
    session = FakeSession()

    services.allocate(line, repo, session)
    assert session.committed is True
We’ll get to a service function that looks something like this basic allocation service (services.py):
from model import OrderLine
from repository import AbstractRepository
import model


class InvalidSku(Exception):
    pass


def is_valid_sku(sku, batches):  #(2)
    return sku in {b.sku for b in batches}


def allocate(line: OrderLine, repo: AbstractRepository, session) -> str:
    batches = repo.list()  #(1)
    if not is_valid_sku(line.sku, batches):  #(2)
        raise InvalidSku(f'Invalid sku {line.sku}')
    batchref = model.allocate(line, batches)  #(3)
    session.commit()  #(4)
    return batchref
Typical service-layer functions have similar steps:
- We fetch some objects from the repository.
- We make some checks or assertions about the request against the current state of the world.
- We call a domain service.
- And if all is well, we save/update any state we’ve changed.
That last step is a little unsatisfactory at the moment: our service layer is tightly coupled to our database layer. But we’ll improve on that in the next chapter.
Notice one more thing about our service-layer function:
def allocate(line: OrderLine, repo: AbstractRepository, session) -> str: #(1)
It depends on a repository. We’ve chosen to make the dependency explicit, and we’ve used the type hint to say that we depend on AbstractRepository.[2] This means it’ll work both when the tests give it a FakeRepository and when the Flask app gives it a SqlAlchemyRepository.
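To spell that out, the same service function accepts either implementation of the abstraction; both of these calls already appear, in slightly different form, in our tests and in our Flask app:

# In the service-layer tests, we pass in the fake implementations:
services.allocate(line, FakeRepository([batch]), FakeSession())

# In the Flask app, we pass in the real ones:
services.allocate(line, repository.SqlAlchemyRepository(session), session)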
If you remember the Dependency Inversion Principle section from the introduction, this is what we mean when we say we should "depend on abstractions." Our high-level module, the service layer, depends on the repository abstraction. And the details of the implementation for our specific choice of persistent storage also depend on that same abstraction.
See the diagram at the end of the chapter, [service_layer_diagram_abstract].
See also [appendix_csvs] where we show a worked example of swapping out the details of which persistent storage system to use, while leaving the abstractions intact.
Still, the essentials of the service layer are there, and our Flask app now looks a lot cleaner. Here it is delegating to the service layer (flask_app.py):
@app.route("/allocate", methods=['POST'])
def allocate_endpoint():
session = get_session() #(1)
repo = repository.SqlAlchemyRepository(session) #(1)
line = model.OrderLine(
request.json['orderid'], #(2)
request.json['sku'], #(2)
request.json['qty'], #(2)
)
try:
batchref = services.allocate(line, repo, session) #(2)
except (model.OutOfStock, services.InvalidSku) as e:
return jsonify({'message': str(e)}), 400 (3)
return jsonify({'batchref': batchref}), 201 (3)
We see that the responsibilities of the Flask app are much more minimal, and more focused on just the web stuff:
- We instantiate a database session and some repository objects.
- We extract the user’s commands from the web request and pass them to a domain service.
- And we return some JSON responses with the appropriate status codes.
The responsibilities of the Flask app are just standard web stuff: per-request session management, parsing information out of POST parameters, response status codes and JSON. All the orchestration logic is in the use case / service layer, and the domain logic stays in the domain.
Finally we can confidently strip down our E2E tests to just two, one for the happy path and one for the unhappy path:
@pytest.mark.usefixtures('restart_api')
def test_happy_path_returns_201_and_allocated_batch(add_stock):
    sku, othersku = random_sku(), random_sku('other')
    batch1, batch2, batch3 = random_batchref(1), random_batchref(2), random_batchref(3)
    add_stock([
        (batch1, sku, 100, '2011-01-02'),
        (batch2, sku, 100, '2011-01-01'),
        (batch3, othersku, 100, None),
    ])
    data = {'orderid': random_orderid(), 'sku': sku, 'qty': 3}
    url = config.get_api_url()
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 201
    assert r.json()['batchref'] == batch2


@pytest.mark.usefixtures('restart_api')
def test_unhappy_path_returns_400_and_error_message():
    unknown_sku, orderid = random_sku(), random_orderid()
    data = {'orderid': orderid, 'sku': unknown_sku, 'qty': 20}
    url = config.get_api_url()
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 400
    assert r.json()['message'] == f'Invalid sku {unknown_sku}'
We’ve successfully split our tests into two broad categories: tests about web stuff, which we implement end-to-end; and tests about orchestration stuff, which we can test against the service layer in memory.
Let’s see what this move to using a Service Layer, with its own service-layer tests, does to our test pyramid:
$ grep -c test_ test_*.py
test_allocate.py:4
test_batches.py:8
test_services.py:3
test_orm.py:6
test_repository.py:2
test_api.py:2
Not bad! 15 unit tests, 8 integration tests, and just 2 end-to-end tests. That’s a healthy-looking test pyramid.
We could take this a step further. Since we can test our software against the service layer, we don’t really need tests for the domain model any more. Instead, we could rewrite all of the domain-level tests from chapter one in terms of the service layer.
# domain-layer test:
def test_prefers_current_stock_batches_to_shipments():
    in_stock_batch = Batch("in-stock-batch", "RETRO-CLOCK", 100, eta=None)
    shipment_batch = Batch("shipment-batch", "RETRO-CLOCK", 100, eta=tomorrow)
    line = OrderLine("oref", "RETRO-CLOCK", 10)

    allocate(line, [in_stock_batch, shipment_batch])

    assert in_stock_batch.available_quantity == 90
    assert shipment_batch.available_quantity == 100
# service-layer test:
def test_prefers_warehouse_batches_to_shipments():
    in_stock_batch = Batch("in-stock-batch", "RETRO-CLOCK", 100, eta=None)
    shipment_batch = Batch("shipment-batch", "RETRO-CLOCK", 100, eta=tomorrow)
    repo = FakeRepository([in_stock_batch, shipment_batch])
    session = FakeSession()
    line = OrderLine('oref', "RETRO-CLOCK", 10)

    services.allocate(line, repo, session)

    assert in_stock_batch.available_quantity == 90
Why would we want to do that?
Tests are supposed to help us change our system fearlessly, but very often we see teams writing too many tests against their domain model. This causes problems when they come to change their codebase, and find that they need to update tens or even hundreds of unit tests.
This makes sense if you stop to think about the purpose of automated tests. We use tests to enforce that some property of the system doesn’t change while we’re working. We use tests to check that the API continues to return 200, that the database session continues to commit, and that orders are still being allocated.
If we accidentally change one of those behaviors, our tests will break. The flip side, though, is that if we want to change the design of our code, any tests relying directly on that code will also fail.
Every line of code that we put in a test is like a blob of glue, holding the system in a particular shape.
As we get further into the book, we’ll see how the service layer forms an API for our system that we can drive in multiple ways. Testing against this API reduces the amount of code that we need to change when we refactor our domain model. If we restrict ourselves to testing only against the service layer, we won’t have any tests that directly interact with "private" methods or attributes on our model objects, which leaves us freer to refactor them.
You might be asking yourself "should I rewrite all my unit tests, then? Is it wrong to write tests against the domain model?" To answer the question, it’s important to understand the trade-off between coupling and design feedback (see The test spectrum.)
[ditaa, test_spectrum_diagram]
Low feedback                                          High feedback
Low barrier to change                         High barrier to change
High system coverage                                Focused coverage

<------------------------------------------------------------------>

API tests              service-layer tests              domain tests
Extreme Programming (XP) exhorts us to "listen to the code." When we’re writing tests, we might find that the code is hard to use, or notice a code smell. This is a trigger for us to refactor, and reconsider our design.
We only get that feedback, though, when we’re working closely with the target code. A test for the HTTP API tells us nothing about the fine-grained design of our objects, because it sits at a much higher level of abstraction.
On the other hand, we can rewrite our entire application and, so long as we don’t change the URLs or request formats, our http tests will continue to pass. This gives us confidence that large-scale changes, like changing the DB schema, haven’t broken our code.
At the other end of the spectrum, the tests we wrote in chapter 1 helped us to flesh out our understanding of the objects we need. The tests guided us to a design that makes sense and reads in the domain language. When our tests read in the domain language, we feel comfortable that our code matches our intuition about the problem we’re trying to solve.
Because the tests are written in the domain language, they act as living documentation for our model. A new team member can read these tests to quickly understand how the system works, and how the core concepts interrelate.
We often "sketch" new behaviors by writing tests at this level to see how the code might look.
When we want to improve the design of the code, though, we will need to replace or delete these tests, because they are tightly coupled to a particular implementation.
Most of the time, when we are adding a new feature or fixing a bug, we don’t need to make extensive changes to the domain model. In these cases, we prefer to write tests against services because of the lower coupling and higher coverage.
For example, when writing an add_stock function or a cancel_order feature, we can work more quickly and with less coupling by writing tests against the service layer.
When starting out a new project, or when we hit a particularly gnarly problem, we will drop back down to writing tests against the domain model, so that we get better feedback and executable documentation of our intent.
The metaphor we use is that of shifting gears. When starting off a journey, the bicycle needs to be in a low gear so that it can overcome inertia. Once we’re off and running, we can go faster and more efficiently by changing into a high gear; but if we suddenly encounter a steep hill, or we’re forced to slow down by a hazard, we again drop down to a low gear until we can pick up speed again.
A few rules of thumb:
- Write one end-to-end test per feature[3] to demonstrate that the feature exists and is working. These might be written against an HTTP API, and they cover an entire feature at a time.
- Write the bulk of the tests for your system against the service layer. This offers a good trade-off between coverage, runtime, and efficiency. These tests tend to cover one code path of a feature and use fakes for I/O.
- Maintain a small core of tests written against your domain model. These tests have highly focused coverage and are more brittle, but they give the highest feedback. Don’t be afraid to delete them if the functionality is later covered by tests at the service layer.
We still have some direct dependencies on the domain in our service-layer tests, because we use domain objects to set up our test data and to invoke our service-layer functions.
To have a service layer that’s fully decoupled from the domain, we need to rewrite its API to work in terms of primitives.
Our service layer currently takes an OrderLine domain object:
def allocate(line: OrderLine, repo: AbstractRepository, session) -> str:
How would it look if its parameters were all primitive types?
def allocate(
        orderid: str, sku: str, qty: int, repo: AbstractRepository, session
) -> str:
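Inside the service, we then build the OrderLine from those primitives ourselves. A sketch of the full function under the new signature (validation and commit unchanged):

def allocate(
        orderid: str, sku: str, qty: int, repo: AbstractRepository, session
) -> str:
    line = OrderLine(orderid, sku, qty)  # construct the domain object inside the service
    batches = repo.list()
    if not is_valid_sku(line.sku, batches):
        raise InvalidSku(f'Invalid sku {line.sku}')
    batchref = model.allocate(line, batches)
    session.commit()
    return batchref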
We rewrite the tests in those terms as well:
def test_returns_allocation():
    batch = model.Batch("batch1", "COMPLICATED-LAMP", 100, eta=None)
    repo = FakeRepository([batch])

    result = services.allocate("o1", "COMPLICATED-LAMP", 10, repo, FakeSession())
    assert result == "batch1"
But our tests still depend on the domain, because we still manually instantiate Batch objects. So if, one day, we decide to massively refactor how our Batch model works, we’ll have to change a bunch of tests.
We could at least abstract that out to a helper function or a fixture in our tests. Here’s one way you could do that, adding a factory function on FakeRepository:
class FakeRepository(set):

    @staticmethod
    def for_batch(ref, sku, qty, eta=None):
        return FakeRepository([
            model.Batch(ref, sku, qty, eta),
        ])

    ...


def test_returns_allocation():
    repo = FakeRepository.for_batch("batch1", "COMPLICATED-LAMP", 100, eta=None)
    result = services.allocate("o1", "COMPLICATED-LAMP", 10, repo, FakeSession())
    assert result == "batch1"
At least that would move all of our tests' dependencies on the domain into one place.
We could go one step further though. If we had a service to add stock, then we could use that, and make our service-layer tests fully expressed in terms of the service layer’s official use cases, removing all dependencies on the domain:
def test_add_batch():
    repo, session = FakeRepository([]), FakeSession()
    services.add_batch("b1", "CRUNCHY-ARMCHAIR", 100, None, repo, session)
    assert repo.get("b1") is not None
    assert session.committed
And the implementation is just two lines:
def add_batch(
        ref: str, sku: str, qty: int, eta: Optional[date],
        repo: AbstractRepository, session,
):
    repo.add(model.Batch(ref, sku, qty, eta))
    session.commit()


def allocate(
        orderid: str, sku: str, qty: int, repo: AbstractRepository, session
) -> str:
    ...
Note: Should you write a new service just because it would help remove dependencies from your tests? Probably not. But in this case, we almost definitely would need an add_batch service one day anyway.
Tip: In general, if you find yourself needing to do domain-layer stuff directly in your service-layer tests, it may be an indication that your service layer is incomplete.
That now allows us to rewrite all of our service-layer tests purely in terms of the services themselves, using only primitives, and without any dependencies on the model.
def test_allocate_returns_allocation():
    repo, session = FakeRepository([]), FakeSession()
    services.add_batch("batch1", "COMPLICATED-LAMP", 100, None, repo, session)
    result = services.allocate("o1", "COMPLICATED-LAMP", 10, repo, session)
    assert result == "batch1"


def test_allocate_errors_for_invalid_sku():
    repo, session = FakeRepository([]), FakeSession()
    services.add_batch("b1", "AREALSKU", 100, None, repo, session)

    with pytest.raises(services.InvalidSku, match="Invalid sku NONEXISTENTSKU"):
        services.allocate("o1", "NONEXISTENTSKU", 10, repo, FakeSession())
This is a really nice place to be in. Our service-layer tests depend only on the service layer itself, leaving us completely free to refactor the model as we see fit.
In the same way that adding add_batch helped decouple our service-layer tests from the model, adding an API endpoint to add a batch would remove the need for the ugly add_stock fixture, and our E2E tests can be free of those hardcoded SQL queries and the direct dependency on the database.
The service function means adding the endpoint is very easy, just a little json-wrangling and a single function call:
@app.route("/add_batch", methods=['POST'])
def add_batch():
session = get_session()
repo = repository.SqlAlchemyRepository(session)
eta = request.json['eta']
if eta is not None:
eta = datetime.fromisoformat(eta).date()
services.add_batch(
request.json['ref'], request.json['sku'], request.json['qty'], eta,
repo, session
)
return 'OK', 201
Note: Are you thinking to yourself, POST to /add_batch? That’s not very RESTful! You’re quite right. We’re being happily sloppy, but if you’d like to make it all more RESTy, maybe a POST to /batches, then knock yourself out! Because Flask is a thin adapter, it’ll be easy. See the next sidebar.
And our hardcoded SQL queries from conftest.py get replaced with some API calls, meaning the API tests have no dependencies other than the API, which is also very nice:
def post_to_add_batch(ref, sku, qty, eta):
    url = config.get_api_url()
    r = requests.post(
        f'{url}/add_batch',
        json={'ref': ref, 'sku': sku, 'qty': qty, 'eta': eta}
    )
    assert r.status_code == 201


@pytest.mark.usefixtures('postgres_db')
@pytest.mark.usefixtures('restart_api')
def test_happy_path_returns_201_and_allocated_batch():
    sku, othersku = random_sku(), random_sku('other')
    batch1, batch2, batch3 = random_batchref(1), random_batchref(2), random_batchref(3)
    post_to_add_batch(batch1, sku, 100, '2011-01-02')
    post_to_add_batch(batch2, sku, 100, '2011-01-01')
    post_to_add_batch(batch3, othersku, 100, None)
    data = {'orderid': random_orderid(), 'sku': sku, 'qty': 3}
    url = config.get_api_url()
    r = requests.post(f'{url}/allocate', json=data)
    assert r.status_code == 201
    assert r.json()['batchref'] == batch2
We’ve now got services for add_batch and allocate; why not build out a service for deallocate? We’ve added an E2E test and a few stub service-layer tests for you to get started.
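If you don’t have the companion code to hand, a stub service-layer test might look something like this sketch; services.deallocate doesn’t exist yet, and writing it is the exercise, so the name and signature here are only a guess:

def test_deallocate_decrements_available_quantity():
    repo, session = FakeRepository([]), FakeSession()
    services.add_batch('b1', 'BLUE-PLINTH', 100, None, repo, session)
    services.allocate('o1', 'BLUE-PLINTH', 10, repo, session)
    batch = repo.get('b1')
    assert batch.available_quantity == 90

    services.deallocate('o1', 'BLUE-PLINTH', 10, repo, session)  # the service you'll write
    assert batch.available_quantity == 100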
If that’s not enough, continue into the E2E tests and flask_app.py, and refactor the Flask adapter to be more RESTful. Notice how doing so doesn’t require any change to our service layer or domain layer!
Tip: If you decide you want to build a read-only endpoint for retrieving allocation info, just do the simplest thing that can possibly work™, which is repo.get() right in the Flask handler. We’ll talk more about reads vs writes in [chapter_11_cqrs].
Adding the service layer has really bought us quite a lot:
- Our Flask API endpoints become very thin and easy to write: their only responsibility is doing "web stuff," things like parsing JSON and producing the right HTTP codes for happy or unhappy cases.
- We’ve defined a clear API for our domain, a set of use cases or entrypoints that can be used by any adapter without needing to know anything about our domain model classes, whether that’s an API, a CLI (see [appendix_csvs]), or the tests! They’re an adapter for our domain too.
- We can write tests in "high gear" using the service layer, leaving us free to refactor the domain model in any way we see fit. As long as we can still deliver the same use cases, we can experiment with new designs without needing to rewrite a load of tests.
- And our "test pyramid" is looking good: the bulk of our tests are fast unit tests, with just the bare minimum of E2E and integration tests.
The next diagram shows the abstract dependencies of our service layer:
[ditaa, service_layer_diagram_abstract_dependencies]
(The Service Layer depends on two abstractions: the Domain Model and AbstractRepository.)
When we run the tests, we implement the abstract dependencies using FakeRepository; the tests provide the implementation of the abstract dependency:
[ditaa, service_layer_diagram_test_dependencies]
(The tests drive the Service Layer and provide a FakeRepository (in-memory) that implements AbstractRepository; the Service Layer still depends on the Domain Model and on AbstractRepository.)
And when we actually run our app, we swap in the "real" dependency at runtime:
[ditaa, service_layer_diagram_runtime_dependencies]
(The Flask API (presentation layer) drives the Service Layer, which depends on the Domain Model and AbstractRepository; SqlAlchemyRepository implements AbstractRepository and uses the ORM (another abstraction), which in turn talks to the Database.)
Wonderful. But there’s still a bit of awkwardness we’d like to get rid of: the service layer is tightly coupled to a session object. In the next chapter, we’ll introduce one more pattern that works closely with Repository and Service Layer, the Unit of Work pattern, and everything will be absolutely lovely. You’ll see!