Building a live architecture knowledge base

Every architecture knowledge base faces the same challenge: finding the balance between detail and accuracy. Less detail means simpler data maintenance, but it certainly makes the knowledge base less usable. If you have limited resources for future maintenance, you have to make compromises. This article shows how you can build an up-to-date, usable knowledge base without bad compromises on scope and usability.

The main problems in building and maintaining a good knowledge base are the following:

  • there are many independent data sources, which have to be consolidated
  • there is no exact method to verify whether the sources are trustworthy
  • in a large organisation, each role has its own truth
  • last but not least, there is no common language.

Moving from these problems towards a solution means facing some great challenges, but the good news is that the equation can be solved! The following process shows, step by step, the main stages of the construction.

First of all, you have to define the scope of the knowledge base, and based on that scope you need an appropriate data model to store the information. Choosing the right tool is just as important!

Scoping

I suggest you start with the following vision: "My knowledge base will contain everything about my enterprise: from the business processes to the physical servers; from service and product dependencies to deployment details; solution building blocks, architecture-related decisions, etc." Luckily this scope can be split into smaller chunks, so you can build a roadmap that covers the critical areas first and keeps the rest for the future.

There is one critical decision in scoping. Whatever areas of the architecture the knowledge base will handle, the overall landscape must be covered, to avoid ending up with multiple knowledge bases. The typical example is applications. Every company has business-critical applications and many other "satellites". The typical mistake is a knowledge base that focuses only on the business-critical ones and neglects the rest. As a consequence, your knowledge base will not be trusted, since it contains only the large applications, and you lose the connection to real-life issues, which in many cases arise in the small ones. On the other hand, business criticality is very important information, so handle it as an attribute of the applications - and never settle for less than the vision of having everything in the knowledge base!

Although completeness is the target for the scope, it is not possible to cover everything in one step. You have to define a clear target and timing, as for any project. If application information is the most critical thing for you, the knowledge base should focus on logical-level application information - but not the actual APIs or deployment details. If you are missing the dependencies of the services you deliver to your customers, start by collecting service information first, and so on.

Data model to use

Experience from past years has led me to use two main models. The first is TOGAF. TOGAF does not have a detailed data model, but it contains high-level aspects for arranging data, as well as procedures and roles, which should also be represented in an enterprise model.

The other model I trust is the TeleManagement Forum SID, which defines a skeleton covering everything from business offerings down to the instances running in your architecture. Although SID does not clean up every corner, it is a great skeleton for building a proper data model of the entire architecture. SID as the skeleton lets you speak a common language with others, and also discuss your own specialities added on top of SID. Let's take an example! If you know TOGAF - or the everyday IT business - you speak about applications or systems and their functional modules. SID, on the other hand, does not define application or functional module; it speaks about Logical Resources, leaving the field open for you to define what kinds of logical resources you have. The list below shows some object types and their categorisation from the TOGAF and SID perspectives. On the TOGAF side the categories are Architecture Building Block (ABB) and Solution Building Block (SBB); on the SID side, Logical Resource (LR) and Physical Resource (PR).

 

  • Application (TOGAF: ABB; SID: LR) - A collection of interconnected functional modules implementing well-defined business service(s). The business services are exposed to other applications and/or end-users for consumption. An application is consequently a logical entity, which delivers an understandable set of business functionalities.
  • Functional module (TOGAF: ABB; SID: LR) - A set of business functions which strictly relate to each other inside an application. The driver of the functional module definition may vary by application, but can be the same if you used a well-defined method to design your architecture, like Simple Iterative Partitions (SIP), detailed in the Everything you always wanted to know about Complexity and How to build simple architecture? articles.
  • Implementation module (TOGAF: SBB; SID: LR) - A set of already-built functionality, installed together. In practice it can be an executable file, a library or a folder, handled as an atomic part in the course of deployment.
  • Logical host (TOGAF: ABB; SID: LR) - Represents an execution environment with the attributes used to prepare physical instances. Here you define, for example, an application server which may have multiple instances, all with the same role.
  • Physical host (TOGAF: SBB; SID: PR) - Nothing else but a server which plays the role defined by a Logical host and executes the instances of Implementation modules. Be aware that a Physical host may be a virtual server, since "physical" has different meanings in the field of infrastructure management!
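
To make this concrete, here is a minimal sketch of how these object types and their relations could be represented, written in Python. The class names follow the list above, but the relation attributes (modules, implements, deploys, realises) and the example records are illustrative assumptions, not prescribed by TOGAF or SID.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalModule:
    """ABB / LR: a set of strictly related business functions inside an application."""
    name: str

@dataclass
class Application:
    """ABB / LR: a logical entity delivering a set of business functionalities."""
    name: str
    business_critical: bool        # criticality is an attribute, never a scope filter
    modules: list[FunctionalModule] = field(default_factory=list)

@dataclass
class ImplementationModule:
    """SBB / LR: already-built functionality installed together (file, library, folder)."""
    name: str
    implements: list[FunctionalModule] = field(default_factory=list)

@dataclass
class LogicalHost:
    """ABB / LR: an execution environment, e.g. an application-server role."""
    name: str
    deploys: list[ImplementationModule] = field(default_factory=list)

@dataclass
class PhysicalHost:
    """SBB / PR: a server (possibly virtual) playing the role of a LogicalHost."""
    name: str
    realises: LogicalHost | None = None

# Example: one CRM application recorded down to a (virtual) server.
search = FunctionalModule("Customer search")
crm = Application("CRM", business_critical=True, modules=[search])
crm_war = ImplementationModule("crm.war", implements=[search])
app_server = LogicalHost("CRM application server", deploys=[crm_war])
vm1 = PhysicalHost("vm-crm-01", realises=app_server)
```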

Some words about data management

The Excel nightmare
Have you ever fantasised about a day when you wake up and Excel is missing from the PCs of all business users? It would immediately reveal all the hidden business needs that have not been ordered for years; why marketing people talk bilge about product statistics; why partner settlement is a black-magic secret; and why the finance people are so scared when an auditor knocks on the door!
OK, frankly, we would then face the problem that we would need 4 years to implement everything in the "legal" applications, and another 4 to catch up with the things that came in the meantime.
So this is not perfect either - not something nice to dream about, and I like to sleep well; but fantasy is free...

We all know that architecture seems to be nothing but rectangles and lines. But it is much more! Architects collect a large amount of data behind the "boxes", which is stored in some kind of repository.

I have seen many times the architecture-management mistake of handling data and drawings as separate entities. The mistake is made by really great architects who would never do the same with a business application, where data, access methods, presentation layers, constraints and all the other professional stuff are designed, implemented and operated properly. We have to focus a bit more on our own tool, the architecture repository! Never use Visio-like tools that produce nice drawings but do not focus on data! The other end is just as bad: an Excel-like repository with great tables holding a lot of information, but with no or poor visualisation on top of it. Excel and Visio, like the other Office tools, are great for handling office documents but bad at managing integrated information sets!

Finally, you need a tool that handles the right data model (see above) with all the details, and lets you define views on top of it to generate drawings, lists or documents.
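
As an illustration of "views on top of the data", the sketch below renders one and the same dependency record set both as a Graphviz DOT drawing and as a flat text list; the dictionary structure is an invented example, not any real tool's schema.

```python
# One data set, two views: a drawing and a list generated from the same records.
# The structure below is an illustrative assumption, not a real repository schema.
DEPENDENCIES = {
    "CRM": ["Billing", "Product Catalog"],
    "Billing": ["General Ledger"],
    "Self-care Portal": ["CRM"],
}

def to_graphviz(deps: dict) -> str:
    """View 1: the dependencies as a Graphviz DOT drawing."""
    lines = ["digraph architecture {"]
    for app, targets in deps.items():
        lines += [f'    "{app}" -> "{target}";' for target in targets]
    lines.append("}")
    return "\n".join(lines)

def to_list(deps: dict) -> str:
    """View 2: the same dependencies as a flat text list."""
    return "\n".join(
        f"{app} depends on {target}"
        for app, targets in sorted(deps.items())
        for target in targets
    )

print(to_graphviz(DEPENDENCIES))
print(to_list(DEPENDENCIES))
```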

Document processing

Once you have defined your scope, model and tool, it is time to start filling up your repository. The first step is to collect and process the available documentation. The bad news is that documents will never be perfect. My experience shows that written materials are outdated (usually very outdated) and focus on development or operational aspects. Even though they are not perfect, they will help you get a first impression of the topics you have to record, and help architects start speaking the same language as the experts do. Certainly there are refreshing exceptions, when documents are so usable that they simplify your life: you just have to convert them (no question, it is a manual task) into the knowledge base.

Personal interviews

After processing all accessible documentation, you should organise personal interviews with the experts of the areas, services or applications of interest. The first prerequisite is to have the experts identified. At this stage you may face the issue that the people who shared documents with you are not the real experts of a given area, or will not take the responsibility of standing as an expert. If this problem occurs, do not hesitate to escalate to the right leaders in the organisation to get the right resources to move forward. Beyond getting supportive people, these potential escalation situations will help you build a strong position in the organisation, and create an extra opportunity to communicate the value of the architecture knowledge base. Never forget: there are no problems, only opportunities!

Back to the interviews. You should make allowance for the fact that the experts have no experience working with a well-structured data model instead of documents, so never make an interview longer than 90 minutes! The topic has to be very focused, again to let the experts keep pace with you. A focused topic also means inviting at most 1-3 experts, to avoid having one spokesman and some silent attendees. Last but not least, one interview is never enough! You have to plan 2-3 sessions per topic, with a break of a few days between them, during which the experts review the results of the previous meeting.

The last word is about the location of the interviews. The best option is a fixed place where all the interviews are run. This could be a small workshop-like room with a large whiteboard and a large screen. The whiteboard is for quick sketches; the screen is for using the knowledge base together. It is also a must to have a table where you sit together and all participants can use their computers, but only to collect more details about open questions!

Result merging

The best way of loading the information into the knowledge base is to have experienced architects do it. That means you have to allocate enough resources on your side to carry out the data recording. Some of the data will certainly be recorded in the course of the interviews, but a significant part is best recorded later, sparing the interviewees' time. The reason to keep the pencil in your own hand is that structured and consistent information recording requires experience, especially as your model becomes more and more complex. The experts you invite may be meeting the knowledge base for the first time, and you cannot expect them to carry out this exercise!

On the other hand, everything should be reviewed by them using the knowledge base - but reading is much simpler than writing...

Auto-discoveries

We should also touch on the area of automated data collection. There are many ways to run auto-discoveries, from infrastructure discovery tools through source code analysis to clickstream recording. All approaches require building a new application, which has to be integrated into the deployment procedure, tested well and operated regularly, like any other business application. In some cases I have played with them and they helped a lot, though their cost is significant.

Take the example of source code analysis, where source code means not only the software sources but also data definitions and configuration files. Assuming you receive the changes, e.g. from a Continuous Delivery process, the analyser has to be prepared to interpret commonly used programming languages and convert them into a common dependency structure. The next task is to build a "dialect" analyser, which interprets the coding and naming conventions of the given application to catch interfaces, business entry points, shared data objects and so on. The last challenge is to merge the recognised changes into the repository using its mass loading interface (assuming it has one).
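
A minimal sketch of that idea, assuming Java-style sources and an invented naming convention (a dependency on another application shows up as an import from its *.api package); a real "dialect" analyser would of course be far richer.

```python
import re
from pathlib import Path

# Illustrative convention (an assumption, not a standard): a dependency on another
# application's interface appears as "import com.acme.<app>.api.<Interface>;".
IMPORT_RE = re.compile(r"^import\s+com\.acme\.(\w+)\.api\.(\w+);", re.MULTILINE)

def extract_dependencies(app_name: str, source_root: str) -> set[tuple[str, str, str]]:
    """Return (consumer, provider, interface) triples found in one app's sources."""
    triples = set()
    for src in Path(source_root).rglob("*.java"):
        for provider, interface in IMPORT_RE.findall(src.read_text(errors="ignore")):
            if provider != app_name:              # skip the application's own packages
                triples.add((app_name, provider, interface))
    return triples

# Triples like ("crm", "billing", "InvoiceQuery") would then be pushed to the
# repository's mass loading interface in whatever format it expects.
if __name__ == "__main__":
    for triple in sorted(extract_dependencies("crm", "./crm-src")):
        print(triple)
```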

The summary above highlights the main tasks, but you will face many challenges behind the scenes. Anyway, if you feel, based on the personal interviews, that auto-discovery is a good option, do not hesitate to give it a try. In the past, when I used it successfully, the average effort we had to spend on an analyser for a new application (meaning one not analysed before) was 5 working days. This amount is not so large, but it assumes a stabilised environment prepared for analysis and a team that has already collected enough experience.

Validation of data

The perpetual question is how we can ensure that the content of the architecture knowledge base is correct. There is no magic wand... First of all, the cross-references between application interviews will highlight mistakes and let you fix them.
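
As a simple illustration of such a cross-check, the sketch below compares the dependencies named in two interviews; the records are invented examples.

```python
# The CRM team's view of its dependencies versus the providers' view of their
# consumers, as recorded in two separate interviews (invented example data).
crm_claims = {"Billing", "Product Catalog"}     # "CRM calls these applications"
providers_claim = {"Billing"}                   # applications listing CRM as a consumer

# Dependencies CRM named that no provider confirmed: follow-up questions
# for the next interview round.
unconfirmed = crm_claims - providers_claim
print("To clarify in the next interview:", unconfirmed)
# -> To clarify in the next interview: {'Product Catalog'}
```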

The real validation is real-life usage! You have to be open to supporting any kind of business need: project design, infrastructure planning, monitoring, financial reporting of assets and so on. In parallel with collecting more users, you have to clearly communicate that the errors found are not problems, but opportunities to become better. In the end, the more channels use your data, the more valid your data becomes!

Support projects

Changes to the architecture usually happen through projects. They can be "normal" projects delivering something new to the company, or well-organised maintenance changes to the infrastructure; either way, they can be identified in a well-operating organisation. I suggest you prepare a function in your knowledge base to support projects from demand all the way to the go-to-production phase. Prepare an object - call it project - which is connected to the entities it touches through creation, modification and retirement relations (a sketch follows below). By designing and maintaining project activities, you will have a complete picture of everything done by projects and project-like activities. Assuming you selected a proper tool, it will be possible to build project overview diagrams, project summary documents and task lists to follow up from the architecture management perspective. Be aware that the architecture management tool should not be a project portfolio management solution handling resource allocation, timing and everyday reporting. What you have to be prepared for is supporting the recognition of project interdependencies, letting one project safely build on another, and finally ensuring data accuracy through proper change follow-up!
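
Here is a minimal sketch of such a project object, assuming the three relation types named above; the class names, the example records and the conflict check are all illustrative.

```python
from dataclasses import dataclass, field
from enum import Enum

class Relation(Enum):
    """How a project touches an architecture entity."""
    CREATES = "creates"
    MODIFIES = "modifies"
    RETIRES = "retires"

@dataclass
class Project:
    """A project linked to the architecture entities it touches."""
    name: str
    touches: list[tuple[Relation, str]] = field(default_factory=list)

    def link(self, relation: Relation, entity: str) -> None:
        self.touches.append((relation, entity))

# Example: a renewal project recorded in the knowledge base.
renewal = Project("Billing Renewal")
renewal.link(Relation.CREATES, "Billing v2 (Application)")
renewal.link(Relation.MODIFIES, "CRM (Application)")
renewal.link(Relation.RETIRES, "Billing v1 (Application)")

# Interdependency check: warn if another project builds on an entity this one retires.
migration = Project("Data Migration")
migration.link(Relation.MODIFIES, "Billing v1 (Application)")

retired = {entity for rel, entity in renewal.touches if rel is Relation.RETIRES}
conflicts = [entity for rel, entity in migration.touches if entity in retired]
print("Potential conflicts:", conflicts)    # -> ['Billing v1 (Application)']
```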

Closing message

By building continuous co-operation with your enterprise, your knowledge base - and consequently the architect team - will become a trusted source and a trusted partner for everyone! The trust you earn establishes one of the critical preconditions of the architect's main target: you will maximise the ability to change!
