Test your Moose code using CIs
You have to test your code!
I mean, really.
But sometimes, testing is hard because you do not know how to start (often because it is hard to start with TDD, or better, XtremTDD 😄).
One challenging situation is the creation of mocks to represent real cases and use them as test resources. This situation is common when dealing with code modeling and meta-modeling.
Writing a model manually to test features on it is hard. Today, I'll show you how to use GitHub Actions as well as GitLab CI to create tests for the Moose platform based on real resources.
First of all, let's describe a simple process when working on modeling and meta-modeling.
```mermaid
flowchart LR
    SourceCode(Source Code) --> Parse --> modelfile(Model File) --> Import --> model(Model in Memory) --> Use
```
When analyzing a software system using MDE, everything starts with parsing the source code of the application to produce a model. This model can then be stored in a file. Then, we import the file into our analysis environment, and we use the concrete model.
All these steps are performed before using the model.
However, when we create tests for the `Use` step, we do not perform all the preceding steps; we most likely just create a mock model.
Even if this situation is acceptable, it is troublesome because it disconnects the test from the tools (which can have bugs) that create the model.
One solution is thus not to create a mock model, but to create mock source code files.
Proposed approach
Using mock source code files, we can reproduce the process for each test (or better, a group of tests 😉)
flowchart LR SourceCode(Mock Source Code) --> Parse(Parse with Docker) --> modelfile(Model File) --> Import(Import with script) --> model(Model in Memory) --> Test
In the following, I describe the implementation and set-up of the approach for analyzing Java code, using Pharo with Moose. It consists of the following steps:
- Create mock resources
- Create a bridge from your Pharo image to your resources using PharoBridge
- Create a GitLab CI or a GitHub Action
- Test ❤️
Create mock resources
The first step is to create mock resources. To do so, the easiest way is to include them in your git repository.
You should have the following:
```
ci      // Code executed by the CI
src     // Source code files
tests   // Test resources
```
Inside the `tests` folder, it is possible to add several subfolders for different test resources.
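For instance, the GitLab configuration shown later in this post expects the following layout (to which you could add more subfolders for other test cases):

```
tests
├── lib   // Jars put on the classpath when parsing (see the -autocp option below)
└── src   // Mock Java source files used as test resources
```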
Create a Pharo Bridge
To easily access the test resources folder of the repository from Pharo, we will use the GitBridge project.
The project can be added to your Pharo Baseline with the following code fragment:
```smalltalk
spec
	baseline: 'GitBridge'
	with: [ spec repository: 'github://jecisc/GitBridge:v1.x.x/src' ].
```
Then, to connect our Pharo project to the test resources, we create a class in one of our packages, a subclass of `GitBridge`.
A full example would be as follows:
```smalltalk
Class {
	#name : #MyBridge,
	#superclass : #GitBridge,
	#category : #'MyPackage-Bridge'
}

{ #category : #initialization }
MyBridge class >> initialize [

	SessionManager default registerSystemClassNamed: self name
]

{ #category : #'accessing' }
MyBridge class >> testsResources [
	^ self root / 'tests'
]
```
The method `testsResources` can then be used to access the local folder containing the test resources.
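For instance, assuming the layout shown above, a test could read one of the mock files like this (the file name is purely illustrative):

```smalltalk
"Hypothetical example: read a mock Java source file from the test resources"
(MyBridge testsResources / 'src' / 'MyClass.java') contents
```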
Warning: this setup only works locally. To use it with GitHub and GitLab, we first have to set up our CI files.
Set up CI files
To set up our CI files, we first create, in the `ci` folder of our repository, a `pretesting.st` file that will execute Pharo code.
```smalltalk
(IceRepositoryCreator new
	location: '.' asFileReference;
	subdirectory: 'src';
	createRepository) register
```
This code will be run by the CI and register the Pharo project inside the Iceberg tool of Pharo. This registration is then used by GitBridge to retrieve the location of the test resources folder.
Then, we have to update the `.smalltalk.ston` file (used by every SmalltalkCI process) and add a reference to our `pretesting.st` file.
```ston
SmalltalkCISpec {
  #preTesting : SCICustomScript {
    #path : 'ci/pretesting.st'
  }
  ...
}
```
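For reference, a complete `.smalltalk.ston` usually also declares what to load. A minimal sketch, assuming your project has a baseline named `BaselineOfMyProject` with its code in `src` (both names are placeholders to adapt to your project):

```ston
SmalltalkCISpec {
  #loading : [
    SCIMetacelloLoadSpec {
      #baseline : 'MyProject',
      #directory : 'src'
    }
  ],
  #preTesting : SCICustomScript {
    #path : 'ci/pretesting.st'
  }
}
```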
Set up GitLab CI
The last step for GitLab is the creation of the `.gitlab-ci.yml` file.
This CI can include several steps. We now present the steps dedicated to testing the Java model, but the same steps apply to other programming languages.
First, we have to parse the test resources using the Docker version of VerveineJ.
```yaml
stages:
  - parse
  - tests

parse:
  stage: parse
  image:
    name: badetitou/verveinej:v3.0.0
    entrypoint: [""]
  needs:
    - job: install
      artifacts: true
  script:
    - /VerveineJ-3.0.0/verveinej.sh -Xmx8g -Xms8g -- -format json -o output.json -alllocals -anchor assoc -autocp ./tests/lib ./tests/src
  artifacts:
    paths:
      - output.json
```
The `parse` stage uses version 3 of VerveineJ, parses the code, and produces an `output.json` file containing the produced model.
Then, we add the common `tests` stage of SmalltalkCI.
```yaml
tests:
  stage: tests
  image: hpiswa/smalltalkci
  needs:
    - job: parse
      artifacts: true
  script:
    - smalltalkci -s "Moose64-10"
```
This stage creates a new `Moose64-10` image and performs the CI based on the `.smalltalk.ston` configuration file.
Set up GitHub CI
The last step for GitHub is the creation of the `.github/workflows/test.yml` file.
In addition to a common SmalltalkCI workflow, we have to configure the checkout step differently and add a step that parses the code.
For the checkout step, GitBridge (and more specifically Iceberg) needs the commit history. Thus, we need to configure the checkout action to fetch the whole history.
```yaml
- uses: actions/checkout@v3
  with:
    fetch-depth: '0'
```
Then, we can add a step that runs VerveineJ using its Docker version.
```yaml
- uses: addnab/docker-run-action@v3
  with:
    registry: hub.docker.io
    image: badetitou/verveinej:v3.0.0
    options: -v ${{ github.workspace }}:/src
    run: |
      cd tests
      /VerveineJ-3.0.0/verveinej.sh -format json -o output.json -alllocals -anchor assoc .
      cd ..
```
Note that, before running VerveineJ, we change the working directory to the `tests` folder to better deal with the source anchors of Moose.
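Putting the pieces together, the whole job could look like the sketch below. The last two steps follow the usual SmalltalkCI setup on GitHub (here via `hpi-swa/setup-smalltalkCI`); their inputs are assumptions on my side, so double-check them against that action's documentation:

```yaml
# Sketch of a complete job; the setup-smalltalkCI inputs are assumed, adapt as needed
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: '0'   # full history, required by GitBridge/Iceberg
      - uses: addnab/docker-run-action@v3
        with:
          registry: hub.docker.io
          image: badetitou/verveinej:v3.0.0
          options: -v ${{ github.workspace }}:/src
          run: |
            cd tests
            /VerveineJ-3.0.0/verveinej.sh -format json -o output.json -alllocals -anchor assoc .
            cd ..
      - uses: hpi-swa/setup-smalltalkCI@v1
        with:
          smalltalk-image: Moose64-10
      - run: smalltalkci -s Moose64-10
        shell: bash
```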
You can find a full example in the FamixJavaModelUpdater repository.
Test
The last step is to adapt your tests to use the model produced from the mock sources. To do so, you can replace the manual creation of a mock model with loading the generated model.
Here's an example:
```smalltalk
externalFamixClass := FamixJavaClass new
	name: 'ExternalFamixJavaClass';
	yourself.
externalFamixMethod := FamixJavaMethod new
	name: 'externalFamixJavaMethod';
	yourself.
externalFamixClass addMethod: externalFamixMethod.
myClass := FamixJavaClass new
	name: 'MyClass';
	yourself.
externalFamixMethod declaredType: myClass.
famixModel addAll: {
	externalFamixClass.
	externalFamixMethod.
	myClass }.
```
The above can be converted into the following:
```smalltalk
FJMUBridge testsResources / 'output.json' readStreamDo: [ :stream |
	famixModel importFromJSONStream: stream ].
famixModel rootFolder: FJMUBridge testsResources pathString.

externalFamixClass := famixModel allModelClasses detect: [ :c | c name = 'ExternalFamixJavaClass' ].
myClass := famixModel allModelClasses detect: [ :c | c name = 'MyClass' ].
externalFamixMethod := famixModel allModelMethods detect: [ :c | c name = 'externalFamixJavaMethod' ].
```
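In practice, the import usually goes into the `setUp` of the test class, so that every test runs against the freshly loaded model. A minimal sketch, assuming a `FamixJavaModel` stored in a `famixModel` instance variable of a test class named `FJMUTest` (the class and test names are illustrative):

```smalltalk
FJMUTest >> setUp [
	"Load the model produced by the CI parse step before each test"
	super setUp.
	famixModel := FamixJavaModel new.
	FJMUBridge testsResources / 'output.json' readStreamDo: [ :stream |
		famixModel importFromJSONStream: stream ].
	famixModel rootFolder: FJMUBridge testsResources pathString
]

FJMUTest >> testExternalMethodIsTypedWithMyClass [
	"The mock sources must define the corresponding entities for this to pass"
	| method |
	method := famixModel allModelMethods detect: [ :m | m name = 'externalFamixJavaMethod' ].
	self assert: method declaredType name equals: 'MyClass'
]
```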
Congrats
You can now test your code on a model generated in the same way as a real-world model!
This solution clearly slows the tests down. However, it ensures that your mock model is well formed, because it is created by the same parser tool (importer) used for real projects.
A good testing practice is thus a mix of both solutions: classic tests in the analysis code, and full-scenario tests based on real resources.
Have fun testing your code now!
Thanks to C. Fuhrman for the typo fixes. 🍌