Cloud services with Delphi: don’t make these mistakes!

So, you have been given your first project to develop a cloud service with Delphi and want to avoid all the pitfalls that can come with such a project? Or are you thinking about migrating your existing applications to the cloud, and want to make sure you are going to start it the right way?

UPDATE: The live webinar is gone, sorry if you missed this one! Attending live is always nice: you can ask questions and possibly win prizes (in this one, we offered a 20% discount coupon). But you can watch the replay here:

Original Post:

Well, you are welcome to attend my next free live webinar at TMS Software Academy: Cloud services with Delphi: don’t make these mistakes. It will happen next Wednesday, June 2nd, at 3:00 PM UTC.

I will talk about what it means to “move to the cloud”, share my experience with developing such applications, and mention the key points you need to pay attention to in order to avoid headaches during development and after your application is running in production.

You are invited! Register for free for the webinar through this link. It will be LIVE, so you also have the opportunity to ask your questions and have them answered right away. I’m waiting for you there.

Business photo created by kstudio

A Windows-native, tristate checkbox TTreeView control

Recently I had to create a Delphi VCL form with a tree-like control. It should be a piece of cake with Delphi: just drop a TTreeView control on the form and I’d be almost there. But there was one gotcha: I wanted to have checkboxes in each node. Worse: checkboxes that could hold three different states (checked, unchecked, partial).

It’s very rare that I have to build complex GUI applications (lucky me), so I hoped that in the most recent VCL all I had to do was enable some property in the TTreeView component. To my disappointment, there is no such support for checkboxes in the tree view.

Since I didn’t want to use a 3rd-party control in this project, I had to find a way to do it manually. I googled for it, and all I could find was the same old way of solving things: create images for the checkbox states, and use the StateIndex property to read/write the checkbox state. I just couldn’t believe this was still the way to do it in 2021.

After more research I found out that Windows Vista and later (sorry, XP folks, we have to move forward eventually) provide an extended style for the native tree view control that allows tristate checkboxes. That’s exactly what I wanted. I was really annoyed that I’d have to “create” (find somewhere) images for the checked, unchecked and partial states… Worse, I’d have to find normal-DPI and high-DPI versions. Using a Windows-native tristate checkbox was the way to go for me.

Enabling tristate checkboxes is as simple as using one line of code to modify the extended style of the Windows control, adding the TVS_EX_PARTIALCHECKBOXES style:


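For reference, here is a minimal sketch of what that looks like (a sketch assuming a TreeView1: TTreeView component on the form; TVM_SETEXTENDEDSTYLE and TVS_EX_PARTIALCHECKBOXES come from the Winapi.CommCtrl unit in recent Delphi versions):

```delphi
uses Winapi.Windows, Winapi.CommCtrl, Vcl.ComCtrls;

procedure TForm1.FormCreate(Sender: TObject);
begin
  // Make sure the underlying Windows control exists before sending the message
  TreeView1.HandleNeeded;
  // wParam = mask of extended styles to change, lParam = new style values
  SendMessage(TreeView1.Handle, TVM_SETEXTENDEDSTYLE,
    TVS_EX_PARTIALCHECKBOXES, TVS_EX_PARTIALCHECKBOXES);
end;
```

Keep in mind this must run after the control’s window handle is created; if the VCL recreates the handle later, the extended style has to be applied again.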
But since I needed more than that – inspecting and modifying the checkbox state, automatically selecting child nodes, etc. – I wrapped everything in a TTreeView class helper and made it available in the CheckTreeView GitHub repository.

Another advantage is that this is not a different component, so there is no need to install any package or use a different class at runtime. Just use the regular TTreeView component.

Enabling tristate checkboxes

Since it’s just a class helper, all you need to do is add the CheckTreeView unit to your form unit:

  uses {...}, CheckTreeView;

Then call the EnableTristateCheckboxes method:


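For example (a sketch assuming a TreeView1: TTreeView component on the form):

```delphi
procedure TForm1.FormCreate(Sender: TObject);
begin
  // EnableTristateCheckboxes is provided by the CheckTreeView class helper
  TreeView1.EnableTristateCheckboxes;
end;
```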
This is enough to add checkboxes to all nodes, and to change a node’s check state when the user clicks the checkbox or presses the space key while the node is selected.

Reading or changing the check state of a node

You can use the CheckState property of a node to read or modify the state of its checkbox:

  Node := TreeView1.Selected;
  case Node.CheckState of
    csUnchecked: Node.Text := 'unchecked';
    csChecked: Node.Text := 'checked';
    csPartial: Node.Text := 'partial';
  end;

  TreeView1.Selected.CheckState := csChecked;

Automatic check states

If you want the TTreeView control to perform these operations automatically when the user checks/unchecks a node:

  1. Check/uncheck all child items of the modified node;
  2. Update the parent node check state based on the check state of the child nodes;
  3. Allow only the checked/unchecked states for the modified node.

Then add OnMouseDown and OnKeyPress event handlers to your TTreeView component with the following code:

procedure TForm6.TreeView1KeyPress(Sender: TObject; var Key: Char);
begin
  TreeView1.HandleKeyPress(Key);
end;

procedure TForm6.TreeView1MouseDown(Sender: TObject; Button: TMouseButton;
  Shift: TShiftState; X, Y: Integer);
begin
  TreeView1.HandleMouseDown(Button, Shift, X, Y);
end;

That’s it, I hope you enjoy it. Grab it from the GitHub repository and let me know what you think by adding your comment below!

20 Years of Delphi and TMS in ERP software: an interview

We have recently published on our YouTube channel (subscribe!) an interview with Alexandre Henzen, Technical Director of Viasoft Korp. The interview (in Portuguese) is available through this link, and in this article right below as well.

In this interview, Alexandre talks about Viasoft Korp: how the company started with just one person and a desktop application built with C++Builder 6, and became an ERP software provider to big Brazilian industries, part of a conglomerate of companies with more than 500 employees.

Among those who took part in this journey are myself, Wagner Landgraf, TMS Software, and Embarcadero, with Delphi, the development tool used by the company for almost 20 years. This interview is about that journey.

For those who can’t understand Portuguese, or just don’t like videos, we have transcribed below, in English, the best moments of the interview.

1:46 – About Viasoft Korp

Wagner Landgraf: What is Viasoft Korp?

Alexandre Henzen: Viasoft Korp is a business unit of Viasoft group. Viasoft is a company that provides ERP software for several different types of business: agribusiness, supermarkets, construction material, among others. And Viasoft Korp provides ERP software for industries. Korp started officially in 2000. (…) At that time software was developed in C++ Builder.

Viasoft Korp provides ERP software for industries.

Alexandre Henzen

3:00 – Migrating from C++ Builder to Delphi

WL: I think it was one of the few ERP software I’ve seen that was built with C++.

AH: Indeed. At some point, around 2004, 2005, it took 6 hours to compile. We then developed a C++ to Pascal transpiler – with your help by the way, I’m not sure if you remember – so we could migrate the full source code to Delphi.

We developed a C++ to Pascal transpiler to migrate from C++ Builder to Delphi.

Alexandre Henzen

6:47 – Key moments that helped the company

WL: What helped Korp to grow? What were the key moments?

AH: TMS Scripter (TMS Software product for editing and executing scripts at runtime) was a big differential for us. (…) It’s typical that ERP must be customizable, thus such flexibility allowed by TMS Scripter, (it’s something that) in 2001, 2002, you didn’t see many things like that. (…) The customers themselves could create forms, even full modules inside the system. That helped us to grow.

Runtime software customization using TMS Scripter helped us to grow.

Alexandre Henzen

9:17 – Using workflow tool

AH: (Another important decision): At a time BPM was not very popular, (to use TMS Workflow in 2007) was also a big plus. The fact you could draw a flowchart and say: “Joe approves the invoice, if the invoice value is higher than X, send it to Jack, etc.” (…) That made our system even more flexible.

17:20 – The origin of TMS Aurelius

WL: Well, we have been friends and technical partners all these years; we (at TMS) have helped you a lot, and you have helped us a lot. We helped you, as you said: Korp had technical needs and we developed and improved solutions to give the ERP flexibility – I believe you have reduced support a lot. But one thing I never said in public, and will talk about here for the first time: if you (who are watching us) use TMS Aurelius (the Delphi ORM framework from TMS Software), be grateful to this person I’m talking to. He was not only the first customer of TMS Aurelius, he was the driving force that led TMS Aurelius to be developed. We are in 2021 and there are many people who still do not know why or how to use an ORM in their software. Alexandre, in 2010, already had the vision that an ORM would help him and his company. He contacted us and we partnered to develop TMS Aurelius, as the existing ORM libraries for Delphi did not fully serve him.

Alexandre was the driving force that led TMS Aurelius to be developed.

Wagner Landgraf

20:43 – Scalability and technologies

AH: All these developments we pursued have always been made with scalability in mind. Always wanting to expand and improve the code – the ERP kept growing more and more complex. So all of these technologies came to make the system flexible and scalable. The system is huge. Today we don’t use just Delphi.

WL: Yes, let’s talk about the other technologies used by Viasoft Korp. The software started as a client/server Windows desktop application. Today, of course, you have many other services: web applications, mobile applications, integrations, microservices. What other technologies are also helping Korp today?

AH: For web development, for example, we chose to use C# on the backend and Angular on the frontend. We also use Golang in some microservices. Each language has its purpose.

WL: And you also have to take into account the needs of the company at the time. One might ask, for example: “why didn’t you use TMS Web Core (the TMS Software product to create web applications with Delphi)?”. Simply because TMS Web Core didn’t exist at that time! When it was first released, Korp already had all its web applications fully developed in Angular.

AH: Exactly. And we also wanted to run on Linux, Docker, lots of things to take into account.

WL: Yes, all of these tools as well. I learn a lot from Korp when I go there, not only about programming, but also about DevOps. They are always dealing with Kubernetes, Docker, Consul, Traefik…

AH: For testing, we developed an internal framework, named Flow, and that was a big change for us. With this tool we write the BDD code and it executes everything. Currently we have around twelve virtual machines on three different servers running tests 24/7.

We also use C#, Angular, Golang. Each programming language has its purpose.

Alexandre Henzen

23:44 – Tests and software quality

WL: You mentioned how TMS Software contributed with the ORM (TMS Aurelius), multitier REST (TMS XData), etc. But speaking about tests: I remember how, many years ago, you (and everyone I knew at the time) struggled with testing and keeping software quality. You had people dedicated just to manual tests, UI tests, etc. I believe these technical improvements over time helped a lot with this.

AH: Yes, ERP is a very complex software. Without these new development paradigms, this would not be possible. Today we have servers running tests 24 hours a day, using continuous integration. We use Jenkins, it retrieves changes from Bitbucket (Git repository for source code version control), immediately runs all test scenarios, all 100% automated.

WL: I believe then that TMS helped you, a little bit, didn’t it?

AH: Absolutely, all the architecture of our Delphi-made software is built around TMS Business.

All the architecture of our software written in Delphi is built around TMS Business.

Alexandre Henzen

27:40 – Size of customers

WL: You mentioned that XData services are processing a large number of requests, please tell us more about the size of your customers.

AH: They come in the most varied sizes. Companies range from 20 to 500 users accessing the system simultaneously. And these are companies with a high volume of logistics handling, issuing fiscal notes (Brazilian legal invoices); they are complex and heavy systems. The amount of information that travels through the system is huge.

28:50 – About recent Delphi versions

WL: Speaking about Delphi. You were using Berlin (10.1), how was this evolution?

AH: We were using Delphi Berlin (10.1) and tried to update to new versions. (The problem is that) our application works like this: it is not just a single executable. There is the main executable and each module in the system is a runtime package, a BPL, (they are modules) that are loaded dynamically as the user keeps using the software. So it’s a huge package structure, and we’ve always suffered from it (runtime packages) in several ways: detecting memory leaks is more complicated, recompiling packages is cumbersome because of package dependencies, etc.

WL: There was even a problem with a Windows update, not related to Delphi, that caused you a big problem, right?

AH: Yes, there was a Windows update that simply screwed everything up (Alexandre is referring to this problem reported in Marco Cantu’s blog). It took almost five minutes just to start the application. We even went so far as to revert to the previous Windows version and block Windows updates on all the company’s machines. (…) Then we migrated to Delphi 10.4 Sydney, the first version (10.4.0). At the beginning we still had problems with the tool; the LSP (Language Server Protocol, Delphi’s new system for code completion) itself had some issues. And now with (the update to) 10.4.2, to which we recently upgraded, we felt the difference – it’s much more stable, the IDE is compiling much faster, and the feedback I’m receiving from the developers is very positive.

Since XE2, the most stable release that I’ve seen, of all of them, is this one, Sydney 10.4.2.

Alexandre Henzen

31:39 – Upgrading to Delphi 10.4.2 Sydney

WL: So, this 10.4.2 release, compared to 10.4, is much better?

AH: Yes, much more stable, not even close. If we take all the Delphi versions that we’ve used all those years, since Delphi XE2, the most stable release I’ve seen, of all of them, is this one, 10.4.2.

WL: I remember that a big problem you had was the compilation time. The time to run the tests, for example, how long did it take?

AH: It took 58 minutes, almost an hour. It dropped to about 28 minutes with just the upgrade to 10.4.2. Then we made a few more changes to the package settings, and it dropped to 12 to 14 minutes, depending on the machine. The runtime packages feature is also much more stable now, it is a big difference.

Compilation time was 58 minutes, it dropped to 12 minutes.

Alexandre Henzen

38:10 – Closure and contacts

WL: Alexandre, thank you very much for being available for this interview.

AH: Thank you, I am available for anyone who wants to contact me, just go to (my profile on) LinkedIn, search for Alexandre Henzen (link here) and we can exchange ideas, I always like to discuss new technologies.


Did you like this interview? Want to share your experiences using Delphi and TMS Software products, or how your company is doing? Leave your comment!

(*) Photo by krakenimages on Unsplash

Two webinars in two days: what’s happening in the Delphi world!

Amazing things are happening in the Delphi world, and more are on the way! This week, two different, important webinars are going to announce great things coming to Delphi!

GraphQL for Delphi, BPM Workflow, and more: what’s coming for Delphi in 2021:

TMS Software is going to host a free, live, interactive webinar to show some recent technologies, features and products being developed to be released in 2021, many of them in a few weeks or months. GraphQL for Delphi, BPM Workflow, multitenancy, authorization and authentication, user management, and more!

The webinar happens:

Tuesday, Feb 23, 2021, 3:00 PM – 4:00 PM GMT

Follow this link for more information and to register to attend for free!

What’s Coming in Delphi, C++Builder, and RAD Studio 10.4.2 Sydney

Find out what is coming in the next major release 10.4.2 Sydney of your favorite developer tool: RAD Studio, Delphi, & C++Builder. This is your opportunity to join product management to understand how your productivity will be improved.

The What’s Coming in Delphi, C++Builder, and RAD Studio 10.4.2 Sydney webinar is offered at 3 times so you can find the one that best suits your schedule.

Wed, Feb 24, 2021 3:00 PM – 4:00 PM GMT
Wed, Feb 24, 2021 6:00 PM – 7:00 PM GMT
Thu, Feb 25, 2021 1:00 AM – 2:00 AM GMT

Register for free for both webinars and learn about all the exciting things coming to the Delphi world!

(Photo by Andrea Piacquadio from Pexels)

A new Embarcadero MVP has arrived!

Embarcadero has been running, for quite some time already, an MVP (Most Valuable Professional) program, which, according to the official web site, “chooses the ‘best of the best’ Embarcadero community members to be trusted assets for our customers and prospects”.

In other words, Embarcadero MVPs are professionals who are recognized for being technically skilled in Delphi (and other Embarcadero products) and for helping its evangelization by writing articles, speaking at conferences and webinars, among other activities.

I’m honored to say that I’ve recently been named an Embarcadero MVP and joined the team – the MVP directory now includes Wagner Landgraf. I’ve also been kindly mentioned by Darian Miller in the post where he announces that he has also become an Embarcadero MVP (his post has interesting and detailed information about the Embarcadero MVP program; check it out if you want to know more).

Well, I just wanted to share this great news with you! I hope the Delphi community continues to grow and get stronger, and I will keep doing my part to make that happen. Thank you for your support, and feel free to leave your comment below.

* Photo by Laula Co on Unsplash

Top content and free prizes at DelphiCon 2020

In this unusual year, Embarcadero is bringing us the DelphiCon 2020 Worldwide event: the official online conference about Delphi. It’s a free, 3-day online event, which will happen from November 17th through 19th with extraordinary content and expert speakers.

We are happy to offer all DelphiCon 2020 attendees a 30% discount on our training courses Introduction to TMS Web Core and TMS Business Masterclass. In addition, we will also offer three free enrollments in our Introduction to TMS Web Core training course to winners selected during the event!

You will attend sessions from a stellar team of speakers like Marco Cantu, David Millington, Bruno Fierens, Dr. Holger Flick, Bob Swart, Andrea Magni, Nick Hodges, Stefan Glienke, Primož Gabrijelcic, Ray Konopka, Chad Hower, Cary Jensen, Alister Christie and Daniele Teti.

Don’t miss this opportunity and save your free seat at DelphiCon 2020 official site!

GraphQL from the perspective of a Delphi developer

Photo by Timothe Courtier on Unsplash

By now you probably know about GraphQL, or at least you might have read or heard that name somewhere. Wikipedia says that “GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data”. If that doesn’t say much to you, or if you have read about GraphQL before but didn’t understand very well what it is, then this article is for you. Its purpose is to try – again – to explain what GraphQL is, its advantages and disadvantages, from the perspective of a Delphi (and also TMS XData) developer.

What is GraphQL?

There are plenty of articles explaining what GraphQL is: the official site, the Wikipedia article, tutorials here and there. So I will try not to be repetitive and will explain it my way, covering the points I was most confused about when I was learning it. First of all:

  • GraphQL is NOT a framework or programming library;
  • GraphQL is NOT a server application;
  • GraphQL is NOT a database server or wrapper.

When I first started learning about it, this confused me. What is GraphQL, then?

  • GraphQL is a SPECIFICATION for a language and a runtime (available here).

In other words, GraphQL strictly specifies a query language and how runtime libraries should “execute” that language. Using a very rough analogy, it’s like a strict specification of SQL, explaining how SQL statements should be parsed and validated, and providing algorithms defining how database servers should execute those SQL statements and return data.

Of course, there are then several GraphQL implementations, which are indeed libraries in several different languages like Java, .NET and JavaScript, that allow you to create, for example, servers that let clients send such queries over HTTP and receive the response of the query execution in JSON format. But note that even that HTTP communication is not strictly specified. It’s just one of many ways to “execute” a GraphQL document (a text file containing a query in valid GraphQL language).

GraphQL is “strongly typed”

This is one key point of GraphQL. Whatever you are going to do with it, you always start from a “schema”, which specifies all the types available in the language. The following is an example of a GraphQL schema:

type Query {
  human(id: Int): Human
}

type Human {
  id: Int!
  name: String!
  homePlanet: String
  height: Float
  mass: Float
}
The above schema specifies two object types, Query and Human, the fields in each object type, and the type returned by each field. Fields can also have arguments (parameters), each with its own specified type (the human field in the Query type has a parameter id of type Int).

If you are implementing a server that supports GraphQL, you have to define a schema first. If you are writing a client that sends GraphQL queries to a server, those queries must comply with the specified schema. You cannot create a query asking for a type that doesn’t exist, or a field that doesn’t exist, or even use a field parameter (argument) of a different data type than the one specified.

It’s interesting to note that, in a world where dynamic languages are so popular (JavaScript being the main one), one of the key claims about GraphQL’s advantages is that it’s strongly typed – something I personally like, as a Delphi developer.

GraphQL documents (queries)

Once the GraphQL schema is defined, clients can write GraphQL documents (queries). In summary, a GraphQL query just lists the fields we want the values for; if those fields return objects, the fields of those objects, and so on, at any depth level you want. For example:

{
  human(id: 1000) {
    name
    height
  }
}
The query above asks for the value of the human field of the Query type (implicit by default because of its name), passing the value 1000 for the id parameter. Since the human field returns an object of type Human, the query also asks for just the name and height fields of that object.

If such a query is sent via HTTP to a server implementing GraphQL, the server will execute the query and return its results. A possible result could be something like this:

{
  "data": {
    "human": {
      "name": "Luke Skywalker",
      "height": 1.72
    }
  }
}

You get what you asked for: the result of the human field, and the name and height fields of the Human object. It’s also important to explain two things:

  1. The GraphQL query is NOT a JSON document, even though it looks like one. It’s a GraphQL query, with its specific syntax as defined by the specification.
  2. The result data IS a JSON document, and the way it is formatted is NOT in the specification. It could be an XML document or any other format.

GraphQL and HTTP

The GraphQL specification doesn’t say anything about HTTP. That really confused me at the beginning, because I was reading all around about how GraphQL compares to REST, how it’s a new way to build APIs, etc. But the thing is, GraphQL is not about HTTP; it’s only about the query language and how it should return results, in any format.

But of course, HTTP became the main way of executing GraphQL queries, and JSON ended up being the most used format for returning the results of such execution. The closest thing to a standard we have are some guidelines on how to serve GraphQL over HTTP, on the official GraphQL site.

In summary, there is no multiplicity of endpoints: GraphQL queries are sent to a single URL endpoint, either directly in the URL in GET requests:

GET /graphql?query={human(id:1000){name,height}}
or in a JSON document, in POST requests:

POST /graphql HTTP/1.1
Host: myapi
Content-Type: application/json

{
  "query": "{ human(id: 1000) { name height } }"
}

And the server response is usually a JSON document, as we saw above. It’s also interesting to note that if the GraphQL query is accepted, the server always returns HTTP status code 200, even if the query execution fails. If there are errors, they appear inside the JSON document (the error format is included in the GraphQL specification):

{
  "errors": [
    {
      "message": "Cannot query field \"homePlanet\" on type \"Human\"."
    }
  ]
}
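From a Delphi client’s perspective, executing such a query is just a plain HTTP POST. Here is a minimal sketch using the RTL’s THTTPClient (the https://myapi/graphql endpoint is hypothetical; any GraphQL server accepting POST requests would work the same way):

```delphi
uses
  System.SysUtils, System.Classes,
  System.Net.URLClient, System.Net.HttpClient;

procedure QueryHuman;
var
  Client: THTTPClient;
  Body: TStringStream;
  Response: IHTTPResponse;
begin
  Client := THTTPClient.Create;
  try
    // The GraphQL document travels as a string inside a JSON envelope
    Body := TStringStream.Create(
      '{"query": "{ human(id: 1000) { name height } }"}', TEncoding.UTF8);
    try
      Response := Client.Post('https://myapi/graphql', Body, nil,
        [TNetHeader.Create('Content-Type', 'application/json')]);
      // Status is 200 even for failed executions; check for "errors" in the JSON
      Writeln(Response.ContentAsString(TEncoding.UTF8));
    finally
      Body.Free;
    end;
  finally
    Client.Free;
  end;
end;
```

The response is the same JSON document shown above, with either a "data" or an "errors" member to inspect.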

Why GraphQL?

Now that we have quickly covered what GraphQL is, it’s time to learn why it’s becoming so widely used.

There are several articles mentioning GraphQL advantages and disadvantages compared to REST, explaining why it’s taking over APIs, directly comparing GraphQL and REST, listing reasons why you should consider a GraphQL server and even how to convince your boss to use GraphQL (despite the awful name – in my opinion – this last one is a nice article about GraphQL).

I will try to summarize what I think are the key reasons. I will also compare it with REST, and especially with TMS XData – a framework for building REST API servers with Delphi.

Clients choose what to fetch

In REST, if you want to retrieve an invoice, you will do something like this:

GET /invoice/10

{
  "id": 10,
  "total": 152
}

Then, if you want to retrieve the customer of the invoice, you need another request:

GET /invoice/10/customer

{
  "id": 115,
  "name": "Joe Doe"
}

Of course, your REST server could return customer data inline in the first request, when the invoice is requested. But then, what if the client does not want customer data? The server will return more data than the client needs. In GraphQL you write the query asking for just the data you want. Either without customer data:

{
  invoice(id: 10) {
    id
    total
  }
}
or with customer data:

{
  invoice(id: 10) {
    id
    total
    customer {
      id
      name
    }
  }
}
And you don’t have to modify your server for that, or create multiple endpoints. Developing client applications with GraphQL, especially mobile and web applications, is a breeze, and it performs very well – in a single request you get all the data you need, and only the data you need.

On the other hand, your REST server could implement mechanisms that allow your client to better choose what it needs. That’s what XData does, for example, with the $expand query mechanism. When an invoice is requested by the client, it comes with minimum customer information (just the id):

GET /invoice/10

{
  "id": 10,
  "total": 152,
  "customer@xdata.ref": "customer(115)"
}

but if the client wants full customer information, it can just ask for it using $expand:

GET /invoice/10?$expand=customer

{
  "id": 10,
  "total": 152,
  "customer": {
    "id": 115,
    "name": "John Doe"
  }
}

When I first learned about GraphQL, my thought was: this might be an advantage in some situations, but if you are using an XData REST server, it’s not a big difference. With XData, there are ways to get all related (associated) information in one single server request (roundtrip).

Schema introspection (Self-documentation)

As I’ve mentioned above, GraphQL is based on a schema which holds information about all the types, fields and other elements needed to describe and validate a GraphQL query. Not only that, GraphQL itself provides a way for clients to retrieve information about the schema, a process called introspection. For example, this query asks for all fields of the type Human, with their respective types:

{
  __type(name: "Human") {
    name
    fields {
      name
      type {
        name
        kind
      }
    }
  }
}
The query would return something like this:

{
  "data": {
    "__type": {
      "name": "Human",
      "fields": [
        {
          "name": "id",
          "type": { "name": null, "kind": "NON_NULL" }
        },
        {
          "name": "name",
          "type": { "name": null, "kind": "NON_NULL" }
        },
        {
          "name": "homePlanet",
          "type": { "name": "String", "kind": "SCALAR" }
        },
        {
          "name": "height",
          "type": { "name": "Float", "kind": "SCALAR" }
        },
        {
          "name": "mass",
          "type": { "name": "Float", "kind": "SCALAR" }
        }
      ]
    }
  }
}
This brings a lot of benefits. First, all clients know in advance what information they can retrieve. That also means that a GraphQL server is self-documented, since the schema already describes what it can offer.

For a client to use a REST server, for example, it necessarily needs some kind of documentation, because the client simply doesn’t know that an invoice resource is available at some endpoint /invoice/:id, what the type of the id is, the ways to create an invoice, the different ways to query, etc. That’s not the case with GraphQL: everything is there.

Also, this makes it possible to create lots of tools around GraphQL. I will mention some of them later in this article.

Again, REST servers also have a nice way to provide meta-information about themselves: Swagger. The problem, of course, is that not all REST servers implement Swagger. Also, some REST implementations require lots of extra coding to properly build the Swagger file, like adding lots of attributes just to flag each endpoint, which parameters are available in each endpoint, etc.

Luckily, TMS XData supports Swagger beautifully, and in a very automatic way. Basically, after you create your REST API with XData, your Swagger document is provided automatically, because it has all the metadata information from the interfaces declared as endpoints, thanks to the way XData is built. So the “introspection” of an XData REST server can be enabled with just a few lines of code, and made available at a single endpoint.


Again, this is a nice GraphQL feature, but with proper tools or effort, REST servers can also provide such a mechanism.


API evolution without versioning

From the GraphQL website: “Why do most APIs version? When there’s limited control over the data that’s returned from an API endpoint, any change can be considered a breaking change, and breaking changes require a new version. If adding new features to an API requires a new version, then a tradeoff emerges between releasing often and having many incremental versions versus the understandability and maintainability of the API.”

In other words: it’s easier to expand and evolve a GraphQL API because clients never receive information they didn’t ask for. Adding a new field to the Human type, for example, is harmless, because existing clients will never receive that new field – they never asked for it. On the other hand, adding a new field to a REST endpoint changes the response received by existing clients (it now contains the new field).

Thus, GraphQL makes it easier to evolve your API and, in theory – in most cases, not all – you don’t need to version your API.

Tools and libraries

In my opinion, this is where GraphQL really shines, and what justifies using it. As you saw, when you use good libraries for building REST servers, like TMS XData, you minimize the advantages of GraphQL over REST. The limitations of REST are known, and a good framework, over time, will add features to solve the problems. That’s what XData does.

But when you have an ecosystem, it’s a different thing. You can’t control or define everything that will be built around a tool or a specification. Since the release of GraphQL, lots of tools and libraries have been built, and once you have your GraphQL API, you can benefit from all of them.

This nice article describes 10 awesome tools and extensions for GraphQL APIs, and I will explicitly mention some of them here.

GraphiQL and/or GraphQL Playground

Simply put, GraphiQL and GraphQL Playground are a “GraphQL IDE”. From there you can write your queries with full code completion (GraphQL has a schema, remember?), syntax check, detailed error messages and positions, among others. It really makes it easy to write and run GraphQL queries.

GraphQL Voyager

GraphQL Voyager is a tool that represents any GraphQL API as an interactive graph. It’s another tool that takes advantage of GraphQL schema introspection. You can quickly see your whole schema in a graphical way, and how each field and type relates to the others. It has a nice live demo you can try.

Browser extensions

There are several GraphQL extensions for your favorite browser that help you out when building web applications that connect to GraphQL. Both the Chrome GraphQL Network and Firefox GraphQL DevTools extensions are valuable tools that allow you to debug, inspect requests, track errors and increase your productivity when working with GraphQL. They are just honorable mentions; of course there are many more extensions for different purposes that you can use.

Clients all around

This deserves a separate article, but of course it’s worth mentioning that GraphQL client libraries are everywhere. If you want to build web or mobile applications, there are clients for React, Vue.js, Angular, Swift, Kotlin, Flutter and many other development tools and libraries.

Final notes

I didn’t name this section “Conclusion” because there is nothing to conclude. Nor was this article intended to compare GraphQL with anything, even though I make comparisons with REST here and there.

The idea was to introduce this technology to newcomers and, for those who already knew about it, to talk about it from the perspective of a Delphi developer, hoping I have clarified some points that might be obscure to those used to Delphi. I tried to focus exactly on the points that confused me when I started learning about GraphQL.

What about you? Do you have any experience with GraphQL? Are you using it in production? What do you think about it? Are you a newcomer and want to ask something or share your doubts? Please leave your comment following the link below and let’s discuss it!

Catching memory leaks in Delphi: a definitive list of tools

Photo by Hunter Haley on Unsplash

Detecting memory leaks is a very important task in Delphi development. I’ve written about it before, stressing that in server applications like the ones built with TMS XData this is even more important.

With the upcoming release of Delphi 10.4, this will become even more relevant. Unified memory management has been promised since last year, and it looks like it has arrived.

As with everything in life, this change isn’t 100% good or bad; it has pros and cons. But one thing is clear: memory is now managed the same way on all platforms, so the ways to detect memory leaks on different platforms are more similar than before. I personally think this is a good thing. It’s also important to note that this doesn’t necessarily mean “more leaks” on mobile platforms. The “old” ARC mechanism (still in effect as I write this article) also had its problems, in my opinion harder ones to detect, like dealing with cyclic references.

But well, enough with this too-long introduction. The humble purpose of this article is to be a definitive and up-to-date list of all the tools you can use to detect memory leaks in Delphi applications. The point about unified memory management is simply that these tools are now more relevant than ever: detecting and fixing a memory leak on Windows now also helps ensure your non-Windows applications don’t leak.

So, to the list!

FastMM (Free)

FastMM (or, to be more specific, FastMM4) is the de facto standard tool for detecting memory leaks in Delphi. The reason is simple: it’s the default memory manager in Delphi, so it’s already built in and ready to use.

Since FastMM allocates and deallocates all the memory in your application, who better to report the blocks that were never deallocated? All it takes to start working with it is a single line in your project:

ReportMemoryLeaksOnShutdown := True;

And voilà: if there are any leaks when your application shuts down, a dialog will be displayed listing all of them.

All Delphi developers should add that line to their applications. Really. I don’t even know why it is not added by default in Delphi, at least wrapped in a {$IFDEF DEBUG} directive. Maybe for historical reasons.
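In a typical VCL project file, that could look like the following sketch (the project name is hypothetical; the surrounding lines are the usual .dpr boilerplate):

```delphi
program MyApp; // hypothetical project name

uses
  Vcl.Forms;

begin
  {$IFDEF DEBUG}
  // Only report leaks in debug builds, so end users never see the dialog.
  ReportMemoryLeaksOnShutdown := True;
  {$ENDIF}
  Application.Initialize;
  Application.Run;
end.
```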

So, if it’s built in, enabled by default, and works, why don’t we finish the article right here? Well, there are some gotchas.

Delphi includes a stripped-down version of FastMM4. It doesn’t have all the nice debugging tools you need (to know, for example, where in the code your leaked memory was allocated). For that, you have to use the full FastMM4 version, available in the public FastMM4 GitHub repository.

You also have to use a DLL for the full debugging features. It’s not cross-platform: it only officially works on Windows (it looks like a macOS version is available in the official repo, but I never tried it). And even though it has lots of features, using them means dealing with .INC files and manual configuration, which might not be comfortable for some users.

But all in all, it’s a great tool, the “standard” tool for catching memory leaks in Delphi. (Side note: FastMM5 has just been released. We haven’t tested it yet, but it looks like it brings a great improvement in performance for multithreaded applications; we can’t wait to try it with TMS XData.)

Pros:

  • Free;
  • Full source code;
  • Built into Delphi;
  • Easy to set up;
  • Lots of advanced features.

Cons:

  • Windows only;
  • Needs an external DLL for debugging features;
  • Not user-friendly to set up and use advanced features (no built-in GUI);
  • Only reports leaks in memory allocated by FastMM itself.

LeakCheck (Free)

Delphi LeakCheck is another great option for detecting memory leaks. It’s also free and open source, and it has some advantages over FastMM: it’s cross-platform, meaning you can check leaks directly in mobile and Linux applications, and it integrates very well with unit test frameworks (namely DUnit and DUnitX).

Getting started with it is similar to FastMM: add the LeakCheck unit as the first unit in your .dpr uses clause, and it will plug itself in, ready to use. Setting it up for unit testing is a little more complicated, but that’s part of the game.
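For reference, a minimal .dpr sketch (the project and form names are hypothetical); the key point is that LeakCheck comes before any other unit:

```delphi
program MyApp; // hypothetical project name

uses
  LeakCheck, // must be the very first unit, so it can install its memory manager
  Vcl.Forms,
  MainUnit in 'MainUnit.pas' {MainForm}; // hypothetical form unit

begin
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  Application.Run;
end.
```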

One small disadvantage is that you are mostly on your own: the project hasn’t received updates for a while (which isn’t necessarily bad, since it works). But it means you probably won’t get much help directly from the author (I never tried, to be fair). There is also not much information about it on the web; besides the detailed description in the official Bitbucket repository itself, I found just a single article explaining how to use it.

Pros:

  • Free;
  • Full source code;
  • Cross-platform;
  • Integrates well with unit testing (DUnit and DUnitX).

Cons:

  • Not much information around about how to use it;
  • No recent updates, no official support.

Deleaker (Commercial)

Deleaker is the only commercial tool in the list that is exclusively dedicated to catching memory leaks. That shows in the product, which provides really nice features for detecting them.

Unlike the previous two tools, and like the following ones, it has a friendly GUI for setting up the environment and inspecting the results, which can be used standalone or integrated into the Delphi IDE. It can also detect many more types of memory leaks: GDI leaks, leaks of Windows USER objects and handles, leaks in Windows APIs, in 3rd-party DLLs, etc. Precisely because of that, it provides options to easily ignore several types of leaks; if you don’t, you will get a gazillion leaks from just a regular app execution.

Another nice feature is the ability to take snapshots of memory allocation. This allows you to detect leaks not only over the lifetime of your whole application, but also during specific operations within it.

Pros:

  • Friendly GUI that can be used standalone or integrated into the Delphi IDE;
  • Detects all types of leaks;
  • Command-line tool for CI integration;
  • Memory usage snapshots;
  • Official support.

Cons:

  • Paid license ($99 for Home License, $399 for Single Developer License);
  • Windows only.

EurekaLog (Commercial)

EurekaLog is an old player. It’s been around for decades; I couldn’t find any info on their web site about when the first version was released, but the oldest information I found is for EurekaLog 4.0, released in 2002, a full 18 years ago.

It’s not a tool dedicated exclusively to catching memory leaks. Instead, it has a full range of features, with memory leak detection being just a “side” feature. The purpose of EurekaLog is to detect any problem in your application (exceptions, leaks, etc.) on the customer’s side, and report it back to you.

Thus, it’s a great tool to help you improve the quality of your software and provide good support to your customers, since you will get error and leak reports from all your customers, in different environments, performing different operations. It also helps you find tricky bugs that only happen on the customer’s side (you know them well, those “cannot reproduce it here” situations).

Pros:

  • Detects both memory and resource leaks;
  • Leaks and errors detected on the customer’s side can be sent automatically to you;
  • Lots of other features: bug reporting, integration with bug tracking systems, among others;
  • Official support.

Cons:

  • Paid license ($149 for Professional License, $249 for Enterprise License);
  • Windows only;
  • Not many advanced features for memory leak detection.

madExcept (Commercial)

I like to say that madExcept is the “cousin” of EurekaLog. Both have been available for about the same time (around 20 years or more), they share similar features, they have more or less the same purpose, and so on.

And, funnily enough, there isn’t a “winner”. If you look around the web for comparisons between the two, you will never reach a conclusion on which is “better”. Customers of both products are usually satisfied, and usually they can’t comment on the competitor because they never used it. That’s my case, actually: I’m a happy EurekaLog customer (although I don’t use it to catch memory leaks), and I have never used madExcept. But it could just as easily be the opposite; I believe I would be well served by madExcept too.

Thus, I consider madExcept’s pros and cons equal to EurekaLog’s. Maybe the only visible difference is that while madExcept is cheaper (there is even a free version for non-commercial use), EurekaLog seems to be more active and more frequently updated.

Pros:

  • Free for non-commercial use;
  • Detects both memory and resource leaks;
  • Leaks and errors detected on the customer’s side can be sent automatically to you;
  • Lots of other features: bug reporting, integration with bug tracking systems, among others;
  • Official support.

Cons:

  • Paid license (€159 for the full source license);
  • Windows only;
  • Not many advanced features for memory leak detection.

AQTime Pro (Commercial)

AQTime is a top-notch tool for making your code better. It sets a really high standard, offering not only an advanced memory leak detection tool (with a nice GUI, snapshots, memory tracking, resource leak detection, among others) but also performance profiling (with both instrumenting and sampling profilers), code coverage, and code analysis.

It’s a really awesome tool, but it has its downsides: it’s pretty expensive, and it looks to be in “maintenance” mode. It receives updates more or less once a year, and the news is mostly “support for the new Delphi version”: a handful of bug fixes over the years and virtually no new features. Still, when it comes to the number and power of its features, it has no equivalent in the Delphi world.

Pros:

  • Detects memory, resource, GDI, and handle leaks, among others;
  • Real-time allocation monitor;
  • Snapshots;
  • Lots of other tools in the bundle (performance profiler, code coverage, etc.).

Cons:

  • Pretty expensive ($719 for a node-locked license, $2,279 for a floating license);
  • Windows only.

Nexus Quality Suite (Commercial)

Much as with EurekaLog and madExcept, I believe Nexus Quality Suite is somewhat related to AQTime. Both provide lots of tools to improve the quality of your software, and there is overlap between them.

Nexus Quality Suite provides memory and resource leak detectors, but also performance profilers, line timers, code coverage, and even an automated GUI tester, among other things.

I haven’t tried the memory check tool myself, so the pros and cons below are based only on what I can see from their web site:

Pros:

  • Detects memory and resource leaks;
  • Official support, active support forums;
  • Lots of other tools in the package.

Cons:

  • Paid license (AUD 490, around $300);
  • Windows only.

DDDebug (Commercial)

From their website: DDDebug is a collection of debugging tools which contains several modules: a memory profiler, a thread viewer, a module viewer and an enhanced exception handler.

One interesting thing I noted about DDDebug is its slightly different approach: it provides memory usage and statistics instantly, from inside the application. I haven’t used it, but it looks like this makes it easier to find bugs in the app, since you can interact with the app while you analyze it.

It also works with packages, which is a plus, provides more functions besides just memory leak detection, and even though it’s commercial, the license price is really affordable.

Pros:

  • Provides results directly inside your application, from a GUI;
  • Supports packages;
  • Official support;
  • Affordable license price (from €59).

Cons:

  • Windows only.

Spider (Free)

Spider’s website lists a lot of interesting features: analysis of exceptions, analysis of real-time memory use, analysis of memory leaks, analysis of the call stack, among others.

The thing is, of all the tools in this list, this is one I only tried once, and I got confused by the user interface and the results themselves. I couldn’t really use it, but maybe that’s just me. So it’s listed here, but I can’t make any fair judgement about it.

Pros:

  • Free;
  • Source code available.

Cons:

  • Confusing user interface (personal opinion).

Non-Delphi Tools

In addition to the above tools, which are Delphi-specific or at least cover Delphi explicitly (integrating into source code, into the IDE, etc.), there are also general-purpose memory leak detection tools that can be helpful in some situations.

Valgrind (Free)

Valgrind is an instrumentation framework. It provides many tools, and you can add your own. One of those tools is memcheck, which will help you find leaks in your application.

I use it a lot myself to detect memory leaks in Linux applications, and it’s actually very simple to use: just execute valgrind and pass the application you want to test as a command-line parameter. Valgrind will launch the application and, when it exits, give you a report with detailed information, including possible leaks. There are of course many command-line options to log to a file, choose the call stack depth, set the detection level, among others.
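For instance, a typical invocation looks like this (the binary name is hypothetical; the options shown are standard memcheck flags):

```
# Run the default memcheck tool, show full details for each leak,
# and write the report to a file instead of stderr:
valgrind --leak-check=full --log-file=leaks.txt ./MyLinuxApp
```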

Instruments (Free)

Apple Instruments is a powerful and flexible performance-analysis and testing tool that’s part of the Xcode tool set. Among other things, it can be used to detect leaks in iOS and macOS applications. Adrian Gallero wrote a nice article on the TMS Software blog about how to use Instruments to detect leaks in iOS applications. It’s a somewhat old post already, but I believe it’s still valid.


Conclusion

There is no winner. Each tool has its own pros and cons, and it’s interesting to note that they are not mutually exclusive. Actually, I use several of them myself, for different purposes.

I use FastMM for my “daily” memory leak detection, LeakCheck in unit tests, Deleaker when I want to check for other types of leaks and use snapshots, EurekaLog for bug reporting in my end-user applications, AQTime for performance profiling and Valgrind for detecting leaks in Linux. As you can see, all of them are useful!

The important thing is: don’t let your application leak memory! If you are just starting with this subject, you know what you have to do now:

ReportMemoryLeaksOnShutdown := True;

Add the above line to your application and start catching leaks!

(Do you know of any other tool that should be on the list? Do you have a different opinion on the listed tools? Please comment below and share your knowledge to help make this list the definitive one. I will update it frequently as I receive new info.)

An interesting discussion about data replication with TMS Echo

This is another of the many interesting discussions we had at the TMS Business Masterclass in Düsseldorf. One of them was the funny discussion about class field names being prefixed with an uppercase “F” or not.

Now, this one is a little more technical, but interesting nonetheless. The session was about database data replication using TMS Echo, and the topic was how changes are sent (moved) from one peer (node) to another.

The question raised was about the “server”, or the “controller”, which orchestrates all this “moving” of data changes. I replied that there is no “server” (in the sense that there is no central orchestrator; the system is distributed and each node can operate independently of the others), only to contradict myself a few minutes later by saying that a “server” is needed (but then I, hopefully, explained the contradiction).

A funny and interesting discussion. I hope you enjoy it, and, after you watch the video, I raise the question to you: is there a server, or not? Leave your comment!

By the way, the complete content of the event is available here: TMS Business Masterclass Online.

TMS Business Masterclass is now online

TMS Business Masterclass Course

In November 2019, TMS Days 2019 took place in both Düsseldorf, Germany and Wevelgem, Belgium. It was the biggest TMS event ever, with three full days in two different cities, and team members from nine different countries around the world.

It was a face-to-face event, and even before it started many people from all over the world approached us asking if they could watch the content online, since they could not travel to attend in person.

TMS Business Masterclass Wevelgem

Well, now, also for the first time, parts of a TMS event are available to be watched online. You will have the opportunity to watch both TMS Business Masterclass days in full length: the one that took place in Düsseldorf and the one that took place in Wevelgem.

We also tried to provide you with high quality material:

  • Audio from the instructor is clear, and the screen recordings are there, of course;
  • A second camera showing the instructor is also present, to give you a more immersive feeling and a more personal touch;
TMS Business Masterclass Instructor
  • Questions from the audience were not always very audible, but we made the effort to subtitle most of them so you can follow and understand everything that was being discussed!
TMS Business Masterclass Subtitles
  • The full recording was carefully reviewed and broken into several smaller pieces, so you don’t get one big 8-hour chunk, but many short “lessons”, each labeled with the subject it discusses. Some are only a few minutes long, so you can be really focused and productive in finding content!
TMS Business Masterclass Curriculum

Follow the links below to get more info about the course, pictures of the event, view the full course content structure, watch some preview videos for free, and of course, enroll to the course!

TMS Business Masterclass in Düsseldorf ($49)

TMS Business Masterclass in Wevelgem ($49)

TMS Business Masterclass Bundle ($79)

Enrollment is $79 for both courses, or just $49 for a single course. Of course, all attendees of the event in Düsseldorf or Wevelgem get free access, not only to the day they attended, but to both. Another perk for the attendees!

To finish this post, watch the personal-favorite excerpt below. When building Delphi classes, do you prefix field names with an uppercase “F”? 😉