Defining The Architect Role, Part 1: An Ontological View

August 09, 2022

According to a quick Google search (definitions provided by Oxford Languages):

ar·chi·tect /ˈärkəˌtekt/ noun

  1. a person who designs buildings and in many cases also supervises their construction.

“the great Norman architect of Durham Cathedral”

  2. COMPUTING a person who designs hardware, software, or networking applications and services of a specified type for a business or other organization.

“we are seeking an experienced software architect to join our scientific computing team”

  3. verb COMPUTING design and configure (a program or system).

“few software packages were architected with Ethernet access in mind”

Similar: designer, planner, builder, building consultant, draftsman

Welp. That’s all there is to see here. Short article!


Naturally, I’m kidding.

There are books, articles, blogs, vlogs and possibly a feature length film entirely devoted to the conversations and debates surrounding the definition of an architect within technological pursuits.

There are some fantastic concepts throughout all of these materials, and despite the advancements in technology, many of these definitions and perspectives still center on the same basic topics. This suggests there is some truth within the brevity we can enjoy from the definition provided above. While simple definitions forego details and specificity, there is incredible value in being able to provide high-level, general definitions. These buckets of classification are integral characteristics of complicated systems with many moving parts. In some cases these buckets lend stability to separate, but similar, concepts to allow for comparisons. In other cases, these buckets act as a barrier against unrelated concepts, giving our limited brains relief from being taxed by scope we simply cannot bring into commanding focus. In summary, the ability to stumble upon simple definitions provides sure footing for large, telescoping concepts that can quickly exceed our fleeting attention spans.

Transparently, I’m not a fan of the term. I don’t like using the words engineer or architect to describe these roles. In some respects, I feel that the “naming is hard” school of thought has misrepresented titling as esoteric, leading to the borrowing of nomenclature established for other purposes. Engineers and architects, in the traditional sense, built things that were intended to stand the test of time as opposed to the dynamic iteration presented in modern technology streams. I’ve always considered each and every person working in a technological field as a “capital-T Technologist.” Whether you are configuring applications for operations organizations, writing code, or designing something, you are all technologists aligned in some form towards a common goal.

However, a one-size-fits-all name isn’t pragmatic as it pertains to the reality of human resources, tech ladders, skill assessments, core competency management and many of the other necessary mechanisms for building and scaling organizations. At some point, we have to descend into the details so that people leaders have a sandbox from which to establish expectations and accountabilities for the tasks that aggregate up to the organizational deliverables.

Many architect roles today are preceded by a classifying noun. This classifier is a meta-term used to distinguish the efforts of a role as it is affixed to the direction and goals of a larger organization, business unit or team.

Those nouns are a construct that has resulted from the yin and yang of hiring managers, human resources teams and talent professionals over the course of many years. The result is a somewhat challenging landscape for younger versions of the aforementioned people: especially talent professionals.

Logistically, it is the kind of problem that is fixable on paper. However, it would require consensus of the entire industry and some form of rigid body to enforce it. History frowns on rigidity of this nature. In reality, people en masse are excellent at creating chaos and herculean messes. Serendipitously, we are also good at making sense of those messes and solving challenges… even the ones of our own creation.

I’ve divided part 1 into two sections:

  1. I’m going to try to define what an architect is at a high level and provide three basic categories of architectural work.
  2. I’m going to compare and contrast an architect with an engineer to elucidate the differences so we can establish a bird’s-eye view of accountability across technological quanta.

This is a scenic tour. It is hard to understand how a role fits in the greater organism of a business without understanding or questioning the organism. There are easter eggs, and I’m going to try to do my best to make sure there are links to all of them. I’d love for this to be a reference, but if it’s just a fun ride through the country with the windows rolled down on a breezy afternoon of the dog days of summer, that’s a worthy consolation!


Architect: The Job.

Architects are responsible for the design of something. Some architects might focus on one of the proposed categories, while others might be balanced in various ratios across two or all three. These responsibilities are categorized based on the output being generated and the nature of the path and intent taken to get to that output.

It’s also important to recognize that while these categories represent a useful form of decomposition for discussing the roles in terms of recruiting and hiring, there is also plenty of crossover and leakage within the definitions. Like most things that exist in nature, there are glorious imperfections! Despite the precision and detail required in technological careers, we must strive to become more comfortable with the flexibility and maneuverability that comes with agility of thought.

My goal is to do the most good by aiming for the middle of the normalized distribution of roles. For those outside the interquartile range, in the tails, or even among the sparse outliers, I apologize that I’ve missed you.

This exercise is not intended to establish some long term nomenclature or language, but rather to provide a temporary ontological view of architectural work that I believe helps frame the discussion relative to hiring, team building, scaling and other organizational efforts.

1. Architecting Something To Do: Process Artifact

Creating process used to be a top-down, ivory-tower endeavor. It was the brainchild of senior leadership, created in conference rooms and monotonous cubicle farms to be disseminated through email and internal content management systems. Over time, bottom-up approaches led to the integration of feedback and autonomous working models. Old processes were challenged by those with intimate proximity to the work being done. Eventually the continuous improvement paradigm wafted through the industry like the warm smell of pie baking on a crisp Autumn day, and the table was full once it was served!

As these concepts (Lean, Agile, DevOps, Lean Startup, and various sub-categories of each) grew and matured, adoption varied. Early adopters may have been met with a greater degree of the pain that comes from immature process. However, they were also the contributors toward its maturity, and inevitably enjoyed a faster velocity once the experience matured from “it sucks!” to “it rocks!”. This catapulted those early adopters ahead of their competitors.

Late adopters and skeptics cite a number of reasons for holding back. Many of the case studies I’ve read outline two common circumstances: a fear of change, and classic ignorance. The latter is a fairly common human condition. We might read an article on something, form a judgement based on that information, and then fail to allow our judgement to evolve or mature along with the object of the opinion. Time passes, and our opinion is still based on the article we read quite some time ago, while the object has matured.

Regardless of the initial reasons, organizations often find themselves at a disadvantage. Some organizations have so much inertia due to the sheer mass (size) of the organization that change is painfully slow. Many of these juggernauts are able to rumble forward due to the momentum of what was effective process prior to procedural innovation. In other words, if all things were equal, the disadvantage would result in the organization falling behind almost immediately. Realistically, all things aren’t equal, so they have a burn rate at which they can operate at that disadvantage before they begin to actually fall behind. Watching the history of leading tech companies from 2000 to the present demonstrates this phenomenon.

This also echoes one of the traits that Eric Ries observed when codifying the Lean Startup: smaller companies can change more easily than large ones.

Not all companies enjoy the ability to operate at a disadvantage for an extended period of time. Those organizations need some kind of spark to reignite their process so that they don’t fall far enough behind that they become obsolete. Entire career paths were constructed to facilitate some of these transformations. One of the first examples of a career that comes to mind is an Agile Coach.

The scope doesn’t have to be as vast as Agile or other software development life cycle practices. It can also be decomposed into smaller good practices. Trunk-Based Development? Test-Driven Development? Anything that falls into the category of ways of working is a potential area for this kind of architecture. As we’ll discuss later, there are tactical and strategic elements that play into architectural roles.

Realistically speaking, most architects will be involved in some shape or form with internal process, especially if the organization is supportive of the continuous improvement ideology. When writing requirements for a position and eventually staffing it, it is important to ask yourself:

  • Is process central or adjacent to the primary responsibilities of this position?
  • Are we inviting a change agent into the company?
  • Will this person be focused on delivering something that will require changes in order to be realized?

An important note about measuring process is that it is highly relative. Even with perfectly executed surveys that collect data with precision, no loss, and a universally accepted representation of quantitative data, the measurement is only as useful as the population from which it originates. In other words, we ultimately measure process against those who execute with the most desirable traits: velocity, quality, safety, etc. Those traits may be influenced differently by domain, size, region, and a number of other features, diluting the ability to compare and contrast process across organizations.

Change Agents: Architects of Process

A change agent, for the purposes of this article, is an external party who has been invited to effect change upon an organization. This person is being brought in because their experience and background are in concepts and skills that are either non-existent or sparse within the organization as it exists today. In some cases, the skills do exist, but they may exist in the leaves of the organizational hierarchy (i.e. junior individual contributors, lower levels of management), resulting in silence for fear of speaking against the existing process or culture.

Inviting a person into the organization in order to change existing process is an indirect statement that the existing process (and possibly the people who created it) is not effective, is stale, or has some other attribute that requires it to be changed. The propensity for challenge is increased if the author(s) of the existing processes are still with the company. A role centered around transformation or change is a likely invitation for conflict.

I dropped the C-word. Some who are reading this probably heard war drums as soon as I mentioned conflict. Some of you also know that it doesn’t have to go down that road. Conflict can be managed, and even be positive if we address it as part of our strategy for change.

While on the subject, here are a few things to consider when creating a role of this nature:

It might be worth testing buy-in before creating the position.

It doesn't hurt (us) to send a canary in the mine. 
(It's not so great for the canary.)

Running surveys, compiling retro results, etc. might be a way to build an internal story to help guide you on a mission of change.

Do you have a plan to address conflict?

If this is only a checkbox on a task list, it is likely to fail spectacularly. Think about how hard it is for you as a single person to make a change. Now, consider what the effort looks like when you need an entire team or organization to change in harmony.

Conflict management, when paired with organizational transformation, is a slow burn. It involves people, who are wonderfully messy, emotional and unpredictable. Can you afford that time?

Time and completion are also interesting covariants. We rarely need full completion of a project immediately. Good problem solving requires the ability to decompose complex problems into smaller easily understood puzzles. Small changes are also easier for us to accommodate. Consider this when addressing a plan to mitigate conflict.

Baby steps!

At the end of the article I’m going to discuss strategy and tactics. This is a good topic to exercise both.

How entrenched is the previous process?

  • Are you a startup that hasn’t formalized any process?
  • Are you a Fortune 500 organization with 75,000+ employees who have been doing the same thing for 30 years?
  • How old is the average employee?
  • What are their backgrounds?
  • Do you have people who understand the changes you want to make already in your organization?
  • What is the average tenure of your employees?
  • What is the average tenure of your leadership?
  • Have there been previous attempts to change?
  • What were the results?
  • How did the organization behave?

Understanding the identity of your organization as it pertains to change is an important part of the journey. The larger question of this section is concerned with the magnitude of a vector. How strong is the existing polarity of your organization? Is this something a single person can manage, or is it something that is going to require drilling strategic holes, placed with mathematical precision across the organization to bring down a larger wall of resistance?

If this is a greater pursuit than a single person, then consider a broader hiring strategy. Forgive me for a moment for nibbling at my own dog food, but if the transformation behaves like old corroded battery terminals that just won’t budge, then don’t hesitate to reach out to a consulting firm.

That’s what we’re here for!

Are you willing to commit to change?

Changing an organization has subtext that we can’t ignore. There will be people who joined your organization because of the identity it had prior to the change you are attempting to make.

Some of those people are going to be supportive of the change immediately. Some of them are going to take some coaxing. Some of them will be resistant, quietly. Some of them will be resistant… not-quietly.

How does the change you are going to make impact your organizational vision, culture and values? I’ve never encountered a circumstance where the answer is “It doesn’t”. It might be small, it might even appear to be insignificant. Sometimes small changes have a larger effect than we anticipate.

You are going to have people in the organization who don’t want to come with you on the next phase of this journey. That’s ok. It is very important to handle this with compassion. Let them go. Help them if you can. Try your very best not to make them feel left behind.

In some cases, the changes we make aren’t just driven by the needs of the organization. Sometimes they are driven by an industry standard. Even in these cases, you can’t force someone to come along. You can certainly provide them data and justification for the changes. However, if they aren’t convinced that the industry is moving in the direction you are, the only thing you can do is let them find out for themselves.

This might seem off the beaten path, but I assure you it isn’t. Architects are leaders, which makes organizational culture one of their primary responsibilities. I believe that an organization’s culture begins a mile away from the front door. This means that even the earliest recruitment discussions, sales relationships, and external vendor relationships are just as important as the relationships between peers or between managers and direct reports.

Just the same, when an employee exits — for whatever reason — they should be allowed their dignity, privacy and a policy of assuming the best of intent. Today’s “immature kid” may be tomorrow’s “model leader”.

Change as a Side Dish: Architects who Affect Process

As an alternative to change agents, there are architects who have one of the other two core focuses I’m going to discuss later. In these cases, process is something that they may or may not have to address as part of their job function.

More often than not, this type of change is a cooperative effort. Architects work with product managers, engineering leadership, executives, and so on to create a multi-dimensional surface of change that blankets an organization.

This kind of change catalyst is more amorphous and organic than the singular notion of an agent. When I discussed change agents as a role, I mentioned that they were often brought in due to the lack (or sparseness) of a skill or concept within an organization. That may only be true in part.

Some organizations that don’t possess the expertise or skills of a desired change are able to drive transformative strategy without bringing in external influences. The difference is in attitude and openness. This is why I emphasized the likelihood of conflict in the previous section. When a single change agent is brought into a company, it is often because the company has an internally-fixated view of process. The employees are less likely to look outside of the box (or organization) for other views or perspectives.

Change by committee is easier to accomplish when you’ve built an organization with teams that are willing to learn new things, are open to new ideas, and seek inspiration from multiple sources. This type of generative culture is often considered the ideal culture because, rather than being bound to a specific implementation of an idea or process, it is focused on the organization’s relationship with performance-focused outcomes. It shirks pathology and bureaucracy.


I want to address what might appear as a sprinkle of bias towards these concepts. As I mentioned earlier, I’m targeting the peak of the bell curve. Most organizations that are able to change effectively don’t create roles centered around transformation, because they have been able to achieve it without doing so. (I alluded to this towards the end of the previous section.)

A single change agent is expensive. The skill set is hard to find because it requires extensive experience, an understanding of what good and bad process look like (in many different domains and organizations), and substantial emotional intelligence and behavioral competence. Beyond that, the landscape of an organization facing these challenges is likely not an easy (or even welcoming) environment to walk into. Assuming you are able to find someone who can do the job, only a fraction of those who can are likely to be willing.

Beyond the expense of staffing the role, any unilateral change effort is going to collide with business as usual, resulting in temporary loss of productivity (the length and degree of which are proportional to the cooperation of the existing organization and the efficacy of the change agent).

In part 2, I’m going to discuss the concept of a Transformation Architect. These roles are divided by the nature of the result, being technical or non-technical. Much of the efforts discussed here have to do with the challenges of human interaction.

2. Architecting Something To Be Created: Autonomous Artifact

When we think of architects, we think of them as people who bring something into existence. There is something creative associated with the title or role. A problem is presented, and the architect is responsible for providing a solution.

In this specific case, a new autonomous artifact is being created.

What is an artifact?

For the purposes of this definition we are focusing not only on the end result (i.e. what is being provided), but also the effort and resources put into the creation of it. Technically speaking, all three categories of the ontology I’m providing (Process Artifact, Autonomous Artifact, Assembled Artifact) create an artifact. The first is easily distinguishable from the latter two, because it is a procedure or way of working. The latter two are harder to tell apart, because they might very well be used by end users in the same manner. In order to tell the difference you have to pop the hood.

I understand this might be a departure from common definitions, however for the purposes of defining roles we’re not just concerned with Oz, but which yellow brick road we’ve taken to get there.

What does “new” mean?

For a moment, let’s think about a car. When a new model of a car is designed, it becomes the template for future iterations of that model. In fact, many models are already derived from other models. One of my favorite cars is the Plymouth Barracuda. The first generation of that model was released in 1964. Each year, a new version of that model was released until it was discontinued after 1974.

Certain traits of the vehicle were modified over time, and others remained the same. In fact, the body of the car itself was derived from the pre-existing Chrysler “A-body” platform.

This isn’t unlike the pattern orientation associated with many architects. We beg, borrow and steal patterns from across the industry that have a demonstrated capability to solve problems that have the same dimensions as the ones we are solving.

When I started writing this, I initially wrote same domain instead of same dimensions. However, it occurred to me that many breakthroughs have come from out-of-the-box thinking that applies patterns created in the context of one domain to problems of similar shape and size from other domains.

Invention vs. Iteration

In terms of defining new, invention is an extreme circumstance where we are ultimately plodding through territory that has never been visited. We are approaching problems with characteristics that aren’t addressed by existing patterns and schools of thought. To reach back to the car example above, this might have been the creation of the Chrysler A platform, which became the base for several dozen of Chrysler’s cars for roughly thirty years.

Invention yields to iteration, where existing patterns are optimized or compared with alternative solutions. Each of the cars constructed atop that platform uncovered issues, gaps, outliers and oddities with the design that needed to be addressed. Some of those issues were systemic, requiring a change for all of the vehicles derived from the platform. In other cases, the issues were more localized to the specific model due to its specification.

Just as a new car can be created wholly from pre-existing patterns, templates and ideas, a newly designed artifact can be constructed entirely of well-scrutinized patterns and approaches while still being considered new.

I’m going to use another “I” word. Innovation. Those of us learning from the brilliance of times gone past are often referred to as “standing on the shoulders of giants”. We are building on the tools and ideas that were left for us, and hopefully planting the seed for the foundation of innovation for future generations.

Where do we draw the line?

The reason to discuss “newness” will become more apparent when I discuss the Autonomous Artifact Test.

As mentioned in the preceding section, we are likely to be utilizing frameworks, tools, libraries, patterns etc. that already exist as we execute to create an artifact. Is there a threshold or line of creational effort that must be reached for the artifact to have earned the lofty adjective: new?

Historically, that line has been drawn by the word code. Artifacts were often the brainchild of software developers, composed of many lines of code. In contrast, assembly was largely a paste-and-glue effort performed through scripts and configuration.

That brief distinction is itself part of the problem with differentiating these tasks. I don’t want to dip into a full-scale assault on the traditional development/operations friction, but a quick pit stop is warranted. Development teams often dismiss the complexity and effort that goes into operations work, just as operations teams often dismiss the weeds of software development. Despite some level of crossover between their roles, they are ultimately two very different categories of technologist.

Scripts and configuration files (especially 10-15 years ago) were possibly an order of magnitude more terse than lines of code. There was little to no boilerplate, or scaffolding for object orientation, design patterns etc. A well written script is ultimately a checklist in executable form. Many of these scripts were migrated from manually implemented runbooks. Over time, they evolved to have controls to stop on failure or prompt for different patterns of execution. However, they usually don’t demonstrate a commanding knowledge of algorithms, data structures etc.
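To make the “checklist in executable form” idea concrete, here is a minimal sketch of my own (the commands and service name are purely hypothetical, not drawn from any real runbook): an ordered list of steps, executed one by one, halting on the first failure.

    # Toy runbook-as-script: each step is a shell command executed in order,
    # and the run halts on the first failure (check=True raises on a non-zero
    # exit code). The commands below are placeholders for illustration only.
    import subprocess
    import sys

    STEPS = [
        ["systemctl", "stop", "example-app"],                          # hypothetical service
        ["cp", "-r", "/opt/example-app/new", "/opt/example-app/current"],
        ["systemctl", "start", "example-app"],
    ]

    for step in STEPS:
        print("running:", " ".join(step))
        try:
            subprocess.run(step, check=True)
        except (subprocess.CalledProcessError, OSError) as err:
            print("step failed, stopping the runbook:", err, file=sys.stderr)
            sys.exit(1)

The structure mirrors the manual runbooks these scripts grew out of: a sequence of steps with a simple control to stop when something goes wrong, rather than an exercise in algorithms or data structures.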

It is beyond the scope of this article to dive into this further, but it is important to understand how deep the challenges between operations and engineering teams run. Their goals are in direct conflict. The nature of their work is close enough to tempt assumptions and conjecture about the other. Even the psychology of the people drawn to each career has certain characteristics that make conflict a possibility.

What’s Code?

Code is such a vague term, even in software, that it is very challenging to have a fruitful, pointed discussion where the word itself serves as an effective identifier. I’d rather talk about ‘why is code’…

Low Code? No Code? These concepts actually require code. They don’t describe some magical tool that generates code from brainwaves. They are tools designed to compete with application frameworks by lowering the barrier to entry into the creation space. The very first web apps were written in low-level languages, back when we weren’t very good at creating programming languages (code!!!). Eventually languages evolved to get better at certain categories of … code. Each generational iteration led to better languages. However, the bar of entry was always an understanding of coding and software. These newer Low/No Code tools put the power of creation into the hands of people who might have a better understanding of the business, domain space, specialized customer knowledge and so on.

Software development has become increasingly simple. Libraries like sklearn, nltk, pandas, numpy etc. have made Python a scientific, data-wrangling, AI/ML powerhouse. Tools like Project Lombok and frameworks like Spring, Micronaut and Quarkus have accelerated developer productivity by providing service harnesses to abstract away non-business focused code, abating tiresome boilerplate, and providing declarative capabilities using annotations.
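As a quick illustration of that simplicity, here is a minimal sketch (the data is fabricated and the features are arbitrary; this isn’t tied to any real project) of how few lines pandas, numpy and scikit-learn need to go from a table of data to a trained, scored model.

    # From raw tabular data to a trained classifier in a handful of lines.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Fabricated example data: two noisy features and a binary label.
    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "feature_a": rng.normal(size=200),
        "feature_b": rng.normal(size=200),
    })
    df["label"] = (df["feature_a"] + df["feature_b"]
                   + rng.normal(scale=0.5, size=200) > 0).astype(int)

    train, test = train_test_split(df, test_size=0.25, random_state=0)
    features = ["feature_a", "feature_b"]
    model = LogisticRegression().fit(train[features], train["label"])
    print("accuracy:", model.score(test[features], test["label"]))

The plumbing (optimization, linear algebra, data wrangling) is entirely abstracted away; the remaining code maps almost one-to-one onto the problem being solved.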

Infrastructure has exploded in every direction. Configuration management, deployment and maintenance have become easier and easier with declarative YAML- or JSON-driven manifests. HashiCorp and the CNCF (Cloud Native Computing Foundation) have entire business models focused on (mostly) simplifying these areas. However, just the same, there are also technologies that require substantial development in areas that might once have been driven with less scrutinized code. As an example, Kubernetes has led to the creation of a number of new standards surrounding compute, storage, and other technical features that have required adjustments to accommodate containerization, orchestration, immutable infrastructure, etc.
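The declarative style is easier to appreciate with a sketch. The example below (the deployment name and container image are hypothetical) builds a Kubernetes-style Deployment manifest as plain structured data and prints it as JSON; the point is that we describe the desired state and leave the “how” to the platform’s reconciliation machinery.

    # Declarative manifests are just structured data describing desired state.
    import json

    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "example-web"},
        "spec": {
            "replicas": 3,
            "selector": {"matchLabels": {"app": "example-web"}},
            "template": {
                "metadata": {"labels": {"app": "example-web"}},
                "spec": {
                    "containers": [
                        {"name": "web",
                         "image": "nginx:1.25",
                         "ports": [{"containerPort": 80}]}
                    ]
                },
            },
        },
    }

    # A controller's job is to reconcile reality toward this description;
    # we declare what we want, not the steps to get there.
    print(json.dumps(deployment, indent=2))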

More often than not, our focus and goals aren’t to spend money, time and effort in the plumbing of a system, but rather to be focused on the specific problems of the organization or business. There is almost a tide-like ebb and flow of complexity as new concepts emerge in the raw and older concepts mature and simplify.

This guy might have an idea. For technologists, code is nothing more than a tool in a toolkit.

What is code… doing?

What is that code (in whatever shape or form it appears) doing? I believe that this is the single most important differentiator. Does the code being written have a direct relationship to the problem(s) defined by the mission statement of the organization trying to provide a product or service in pursuit of the solution of said problem?

If the answer is yes, then this is an autonomous architect.

If the answer is no, then this may not be an autonomous architect.

The easiest way to differentiate this is with what I call the “Autonomous Artifact Test”:

If the sole purpose of the code is NOT only to assemble or 
otherwise integrate two or more pre-existing artifacts,

AND 

the code CAN exist on its own outside of the environment for 
which it was created,

then it can stand on its own two legs, and it is therefore 
an autonomous artifact. 
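For those who think better in code, here is a minimal sketch of the test as a predicate. This is my own paraphrase; the function and argument names are purely illustrative.

    # The Autonomous Artifact Test as a predicate: code passes when it is more
    # than glue between pre-existing artifacts AND it can live outside the
    # environment it was built for.
    def is_autonomous_artifact(only_assembles_existing_artifacts: bool,
                               can_exist_outside_original_environment: bool) -> bool:
        return (not only_assembles_existing_artifacts
                and can_exist_outside_original_environment)

    # A general-purpose storage engine vs. a deploy-time glue script.
    print(is_autonomous_artifact(False, True))   # True  -> autonomous artifact
    print(is_autonomous_artifact(True, False))   # False -> assembly / glue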

Let’s investigate the contrary. Consider the case of plumbing and infrastructure. Many of Google’s most famous technologies (e.g. BigTable, GFS) were never directly released for public consumption. Those technologies were created somewhere along the journey of optimizing the search engine.

Many organizations today, even large ones, don’t go to the same lengths to solve problems for their customers. Instead, they leverage the best-of-breed technologies available at any given time in order to optimize their business expenses. Distributed systems are hard, especially the further you swim from the surface. The expertise required to build those types of environments isn’t easy to find, and it’s expensive to boot.

This is one of the most critical reasons that cloud computing has become the force that it has. It subtracts much of the technical complexity that you don’t need (or want) to be involved with so that you can focus all of your human capital on your mission statement and business problems. (It is also better for technology as a whole, because it accelerates innovation and discovery.)

So, how do we come up with a definitive answer? Autonomous or not? I look at the relationship of the architect to the artifact. I deliberately used Google’s previously mentioned artifacts as an example, because there are open source projects (HBase and HDFS, respectively) that were inspired by BigTable and GFS and created by Doug Cutting and Mike Cafarella while at Yahoo. Cutting is also the co-founder of Lucene, which is the underlying core of Elasticsearch. I outlined this relationship as another example of how the creation of one tool (Lucene) might help facilitate the development of another tool (Elasticsearch) that provides simplicity and better abstraction. The beauty of this process is that we aren’t throwing out the baby with the bath water or boiling oceans. Instead we are layering, refactoring, and reusing. While I’m geeking out: the SSTable (Sorted Strings Table) is a storage concept central to LSM (Log-Structured Merge) Tree data stores. This article goes into its relationship to BigTable and Cassandra. I cited this article because it might help trace back Cutting’s involvement with BigTable. The SSTable structure was also used for Lucene, suggesting that he used this concept to bootstrap many different tools.
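To give a flavor of what an SSTable actually is, here is a toy sketch of my own (nothing like the real BigTable, Cassandra or RocksDB implementations): an immutable set of key/value pairs frozen in sorted key order, so reads can binary-search instead of scanning.

    # Toy SSTable: an in-memory "memtable" dict is flushed into an immutable,
    # sorted structure; sorted order turns lookups into a binary search.
    import bisect

    class ToySSTable:
        def __init__(self, memtable: dict):
            self._items = sorted(memtable.items())
            self._keys = [key for key, _ in self._items]

        def get(self, key):
            i = bisect.bisect_left(self._keys, key)
            if i < len(self._keys) and self._keys[i] == key:
                return self._items[i][1]
            return None

    table = ToySSTable({"banana": 2, "apple": 1, "cherry": 3})
    print(table.get("apple"), table.get("durian"))  # 1 None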

Each of these technologies passes the Autonomous Artifact test, suggesting that the relationship can be compositional. He created (with help?) a tool (an autonomous artifact) which was used to create further tools (new autonomous artifacts).

If you step back to get a wider view, it’s interesting to see how so many different technologies were almost quite literally grown from the same seed.

Speaking of seeds and branches, the first design document for HDFS was created by Dhruba Borthakur, also while at Yahoo, who would later be known for his work on RocksDB at Facebook. RocksDB also happens to be an LSM-based embeddable key-value store leveraging SSTables. Many of you might not have heard of RocksDB, but if you flip through the Wikipedia article, you’ll note that it is the storage engine for many well known data stores.

Why does this matter? When Yahoo was bootstrapping Hadoop, the business wasn’t focused on storage; neither was Google when it created BigTable and GFS, nor was Facebook when it created RocksDB. These were all means to an end to gain an edge. The companies had evolved into more than just organizations that solve business problems. They emerged as technology companies that optimize and innovate the way those problems are solved. This is one of the ways to prevent commoditization (or at the very least to delay it for a while). The compositional nature of artifacts I mentioned above is a case of “I have to make a thing to make the other thing.” or “I have to make a thing to improve the other thing.”

Technology, as Shrek says of ogres, has layers. This is the nature of the autonomous artifact test, and why it can’t be evaluated on its relationship to the business alone.

At the time BigTable and GFS were created, it is likely that their impact on technology as a whole couldn’t have been accurately predicted to be what it has been. Maybe these companies knew that they were trying to innovate. Maybe they were just trying to solve a problem because they wanted to solve it better than the existing technology allowed for. Maybe companies set out to be “technology organizations”, or maybe they just get there organically due to a drive to excel. At some point Google became more than “find things on the internet!”. The following links don’t go to the organizations themselves, but rather to technologies that they are responsible for providing: Facebook, Uber, LinkedIn (and again), Lyft, Netflix, and so many more.

These projects all pass the Autonomous Artifact Test.


As mentioned in the first category, creating autonomous artifacts may not be the only function of the role. The purpose of the ontology is to define some basic, high-level categories of work. It is not only possible, but highly probable, that there will need to be changes to existing processes, and some form of assembly or integration along the way. This is especially true with the prevalence of modern distributed systems architectures, where old, middle-aged and new components are constantly being sunset and born at different rates.

If I can revisit the initial question What is an artifact?, I like to think of it as a living, breathing animal of the technology world. It is autonomous… and it can bite you if you don’t feed it regularly.

3. Architecting Something To Be Put Together: Assembly Artifact

Last, but certainly not least, is the effort of assembly. This category of architect is more concerned with piecing things together, and less with the creation of something new. If the assembly results in something new, that’s ok; assembly as a concept doesn’t prohibit this. The point is less about the end state of what is being created, and more about the path to getting there.

This is a more straightforward concept, but before we leave it, I’d like to talk about one of the more interesting roles that might be earmarked for this category. Classic “systems integration” is often the creation of a hardware artifact or device from a collection of sub components. An example might be your laptop, or a NAS system, a guitar amplifier, a smart phone and so on. In many cases, companies purchase all of the sub-components of the aforementioned devices without manufacturing any of them on their own.

Those very same companies hire someone (usually several someones) to design the assembly and integration of these parts into a final artifact.

The reason I mention this example is that in many cases, these efforts pass the “Autonomous Artifact Test”. Much of the code created for these systems might be custom firmware, BIOS, bloatware (smart phones!) and so on. While these bits and pieces of code may be optimized for the system they are being designed for, many of them can be used outside of that environment.

So why did I discuss something that breaks my own model?

First and foremost, as a reminder that this model is intended to be organic and elastic. It is simply impossible to address every possible circumstance. The Autonomous Artifact Test is a very basic way to differentiate between two loosely defined categories of design work. The line is not absolute, and that’s ok.

Additionally, I intentionally prefixed the words “system integration” above with “classic”. Books and older technical texts use this term, but it has somewhat fallen out of fashion over the past few decades. Given common usage (or lack thereof), I feel some comfort in referring to a classic “systems integration” as an autonomous artifact architecture.

Like the previous two categories, assembly or integration can be a part of a multi-faceted role, or it could very much be the entirety of a role. Assembly in the case of process or artifacts is often a set of tasks that occurs as part of a complicated design process. Distributed applications might require integration and assembly of the various artifacts. A process that requires the coordination of multiple business units may have smaller procedures that aggregate up to a larger, more comprehensive process.

There are cases where integration or assembly happens on its own. These efforts are very common when working with infrastructure, cloud service providers, and other pre-existing artifacts being glued together for some purpose.


Thanks for joining me for this word salad extravaganza that is part 1 of my foray into a discussion about architects.

In part 2, I’m going to discuss how architecture and engineering skill sets can be applied to a technology organization in various ways to complement each other for optimal outcomes.



Written by Ed Mangini
A Technology Blog about useful stuff.