What is the Customer Information Control System (CICS)?


IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.

CICS family products are designed as middleware and support rapid, high-volume online transaction processing.

CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use.

Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections and terminals, or to invoke functions such as web services. CICS manages the entire transaction so that if any part of it fails for any reason, all recoverable changes can be backed out.

While CICS TS has its highest profile among large financial institutions such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS; other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive application.

Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors and build tools, alongside updates in the key areas of security, resilience and management. Earlier CICS TS releases added support for web services and Java, event processing, and Atom feeds.

CICS TS for z/OS 5.6 was announced on April 7, 2020 and became generally available on June 12, 2020. This release builds on the reputation of CICS TS as IBM's premier mixed-language application server.


History

CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed so that MTCS transactions could execute under CICS without change to the application programs.

CICS was developed in the USA at an IBM development center in Des Plaines, Illinois, beginning in 1966, to address requirements from the public utility industry. The first CICS product was announced in 1968 under the name Public Utility Customer Information Control System (PU-CICS). It became clear immediately that CICS was applicable to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS program product on July 8, 1969, not long after the IMS database management system.

During the early 1970s, CICS was developed in Palo Alto and was considered a less important, "smaller" product than IMS, which IBM then regarded as more strategic. Customer pressure kept it alive, however. When IBM decided in 1974 to end CICS development in order to concentrate on IMS, responsibility for CICS development was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. The core of the development work continues at Hursley today, alongside contributions from labs in India, China, Russia, Australia, and the United States.

Early development

CICS originally supported only a few IBM-brand devices, such as the 1965 IBM 2741 Selectric ("golf ball") typewriter-based terminal. Support for the 1964 IBM 2260 and 1972 IBM 3270 video display terminals came later.

In the early days of IBM mainframes, computer software was free, bundled with the computer hardware at no extra charge. The OS/360 operating system and application support software like CICS were available to IBM customers long before the open-source software movement. Corporations such as Standard Oil of Indiana (Amoco) made major contributions to CICS.

The IBM Des Plaines team tried to add support for popular non-IBM terminals such as the ASCII Teletype Model 33 ASR, but the small, low-budget software development team could not afford the $100-per-month hardware on which to test it. IBM executives incorrectly believed that the future would be like the past, with batch processing using traditional punch cards.

IBM reluctantly provided only minimal funding when public utility companies, banks and insurance companies demanded a reliable, high-speed interactive system (like the 1965 IBM Airline Control Program, used by the American Airlines Sabre computer reservation system) for high-speed access to, and updating of, customer information for their telephone operators, without waiting for overnight batch processing of punch cards.

When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP, the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. The result was then given back to IBM for free distribution to others.

Within a few years, CICS generated over US$60 billion in new hardware revenue for IBM and became its most successful mainframe software product.

In 1972, CICS was available in three versions: DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7) for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines that ran OS/360.

In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases), relocated to California and continued CICS development at IBM's Palo Alto development center. IBM executives did not recognize the value of software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM provide its own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as an intelligent CICS terminal (instead of the incompatible Intel chip and the immature ASCII-based Microsoft 1980 DOS).

Every CICS installation had to assemble the source code for the CICS system modules after completing a process similar to a system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude from CICS itself support for any feature it did not intend to use, such as terminal support for terminal types not in use.

CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multithreaded processing architecture, and its relative simplicity for developing terminal-based real-time transaction applications.

Z notation

Part of CICS was formalized in the 1990s using the Z notation, in collaboration with the Oxford University Computing Laboratory under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.

CICS as a distributed file server

In 1986, IBM announced CICS support for the record-oriented file services defined by the Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.

In more recent versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003 and removed from CICS for z/OS from version 5.2 onward. In CICS TS for z/VSE, DDM support was stabilized at the V1.1.1 level, with an announced intention to discontinue it in a future release; as of CICS for z/VSE 2.1, CICS/DDM is no longer supported.

CICS and the World Wide Web

CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a web bridge technology for wrapping green-screen terminal programs with an HTML facade. The CICS Web and Document APIs were enhanced in CICS TS V1.3.

CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty profile into CICS Transaction Server V5.1.

CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client-side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other enterprise applications and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL and converting them into WSDL-defined web services with few or no program changes. This technology saw regular enhancements over successive releases of CICS.

CICS TS V4.1 and V4.2 brought further web connectivity enhancements, including a native implementation of the Atom publishing protocol.

Many of the newer web-facing technologies were made available for earlier releases of CICS through delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the SOAP for CICS technology preview SupportPac for TS V2.2 and the Atom SupportPac for TS V3.1. This approach was also used to introduce JSON support for CICS TS V4.2, a technology that was later integrated into CICS TS V5.2.

The JSON technology in CICS is similar to the earlier SOAP technology; both enable programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can use assets from several mainframe subsystems.

Many partner products are also used to interact with CICS. Popular examples include the CICS Transaction Gateway for connecting to CICS from JCA-compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.

Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems and can access remote systems themselves; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; and devices, users and servers can interact with CICS using standards-based technologies. The IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies.


MicroCICS

By January 1985, a consulting firm founded in 1969, which had built "massive online systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became known as MicroCICS. The initial focus was the IBM XT/370 and IBM AT/370.

CICS family

Although when CICS is mentioned people usually mean CICS Transaction Server, the CICS family refers to a portfolio of transaction servers, connectors (the CICS Transaction Gateway), and CICS tools.

CICS on distributed platforms, rather than mainframes, is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It runs C, C++, COBOL, Java™ and PL/I applications in the cloud and in traditional data centers, and is available on AIX, Linux x86, Windows, Solaris, and HP-UX. CICS has also been available on other systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant.

Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS, a single-user version of CICS intended for development use, with the applications then transferred to an MVS or DOS/VS system for production execution. Later, in 1988, IBM released CICS/VM. CICS/VM was intended for use on the IBM 9370, a low-end mainframe targeted at departmental use; IBM positioned CICS/VM running on departmental or branch-office mainframes for use in conjunction with a central mainframe running CICS for MVS.

CICS tools

CICS tools provide provisioning, management and analysis of CICS systems and applications. This includes performance management as well as deployment and administration of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS tools are: CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS, and CICS Configuration Manager for z/OS.


Programming guidelines

Application programs for interactive transactions had to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS's reentrant/reusable control programs meant that, with judicious "pruning", multiple users with multiple applications could be run on a computer with just 32 KB of expensive magnetic-core physical memory (including the operating system).

Considerable effort was still required of CICS application programmers to make their programs as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4 KB, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or for other application storage needs. When virtual memory was added to versions of OS/360 in 1972, the 4 KB strategy became even more important in order to reduce paging and thrashing, an unproductive, resource-consuming overhead.

The efficiency of compiled high-level COBOL and PL/I programs left much to be desired, so many CICS application programs continued to be written in assembler language even after COBOL and PL/I support became available.

With 1960s and 1970s hardware resources expensive and scarce, a competitive "game" of system optimization developed. When critical-path code was identified, a code snippet was passed around from one analyst to another. Each person had either to (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what their more experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all).

Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program, or of operating-system memory, was restricted (by convention only).

Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who did not understand the internals of their programs or failed to use the necessary restrictive compile-time options. The result was "non-reentrant" code that was often unreliable, leading to spurious storage violations and crashes of the entire CICS system.

Originally, the entire partition, or multiple virtual storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and corruption of CICS control blocks were a frequent cause of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions; locating the offending application code behind complex transient timing errors could be very difficult.

These shortcomings persisted through multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in V3.3, V4.1 and V5.2 with the storage protection, transaction isolation and subspace features respectively, which use hardware features to protect the application code and data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.

In addition, a measure of advance protection can be obtained by testing new applications under a monitoring program, which also provides test and debug features.

Programming at macro level

When CICS was first released, it supported only application transaction programs written in IBM 360 assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file was made by a macro call to the File Control Program of CICS along the lines of DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE, etc.

This gave rise to the later terminology "macro-level CICS".
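As an illustrative sketch, not taken from the original text, a macro-level request to read a record for update might look like the following; the dataset name and key-address operand values are invented placeholders:

```assembler
* Hypothetical macro-level fragment. The DFHFC macro requests a
* service from the CICS File Control Program; the DATASET and
* RDIDADR values here are illustrative placeholders only.
         DFHFC TYPE=READ,DATASET=MYFILE,RDIDADR=RECKEY,TYPOPER=UPDATE
```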

When high-level language support arrived, the macros were retained and the code was converted by a pre-compiler that expanded the macros into their COBOL or PL/I CALL statement equivalents. Preparing an HLL application thus effectively required a "double compile": output from the preprocessor was fed into the HLL compiler as input.

COBOL considerations: unlike PL/I, IBM COBOL of the era provided no means of manipulating pointers (addresses). To give COBOL programmers access to CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which are set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be set dynamically, either by CICS or by the application, to allow access to the corresponding structures in the Linkage Section.
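A minimal sketch of that convention, with invented field names (this fragment is illustrative, not from the original text):

```cobol
      * Hypothetical Linkage Section illustrating BLL cells. The
      * first 01-level holds the list of addresses (BLL cells);
      * each following 01-level maps to the corresponding cell.
       LINKAGE SECTION.
       01  BLL-CELLS.
           02  FILLER         PIC S9(8) COMP.
           02  CUSTREC-BLL    PIC S9(8) COMP.
       01  CUSTOMER-RECORD.
           02  CUST-NAME      PIC X(30).
           02  CUST-BALANCE   PIC S9(7)V99 COMP-3.
      * Setting CUSTREC-BLL (directly, or via a CICS service) makes
      * CUSTOMER-RECORD address the storage at that location.
```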

Programming at command level

During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "command-level CICS". It still supported the older programs but introduced a new API style for application programs.

A typical command-level call would look like this:
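The call itself is missing at this point in the text; the following is a hedged reconstruction of a typical command-level SEND MAPSET call, with illustrative map and mapset names:

```cobol
      * Hypothetical command-level fragment: send map MYMAP from
      * mapset MYSET to the terminal. Names are placeholders.
           EXEC CICS
               SEND MAPSET('MYSET') MAP('MYMAP')
           END-EXEC.
```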


The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition for the MAPSET argument, and on the DFHMDI macro for the MAP argument. The commands are pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So preparing application programs for execution still required two stages. It was possible to write "mixed-mode" applications containing both macro-level and command-level statements.

Initially, at execution time, the command-level commands were converted by a run-time translator, the "EXEC Interface Program", into the old macro-level calls, which were then executed by the largely unchanged CICS nucleus. When the CICS kernel was rewritten for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed.

Run-time conversion

In the early 1990s, CICS became command-level only. IBM also dropped support for macro-level application programs written for earlier versions. This meant that many application programs had to be converted or rewritten to use command-level EXEC commands only.

By this time, there were perhaps millions of programs worldwide that had been in production for decades. Rewriting them often introduced new bugs without necessarily adding new features. A significant number of users ran CICS V2 Application-Owning Regions (AORs) to continue running macro code for many years after the move to V3.

It was also possible to execute old macro-level programs using conversion software such as APT International's Command CICS.

New programming styles

Recent CICS Transaction Server releases have added support for a number of newer programming styles.

CICS Transaction Server version 2.1 introduced support for Java. CICS Transaction Server version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere family of products, so EJB applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of EJB applications.

More recent versions of CICS have placed an emphasis on wrapping existing application programs inside modern interfaces, so that long-established business functions can be incorporated into more modern services. This includes WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can retrieve and update the core business objects without requiring a major rewrite of the back-end functions.


Transactions

A CICS transaction is a set of operations that together perform a task. Usually the majority of transactions are relatively simple tasks, such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM System z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.

CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, REXX, and Java.

Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, shown in different colors, and/or blinking, depending on the terminal type used. The end user inputs data, which is made accessible to the program by receiving a map from CICS.
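A hedged sketch of this flow (the transaction, map, mapset and data names below are invented for illustration):

```cobol
      * Hypothetical pseudo-conversational fragment: receive the
      * user's input map, process it, send a reply map, and return
      * to CICS with a transaction id for the next interaction.
           EXEC CICS RECEIVE MAP('MYMAP') MAPSET('MYSET')
                     INTO(MYMAP-DATA)
           END-EXEC.
      *    ... validate the input and update resources here ...
           EXEC CICS SEND MAP('MYMAP') MAPSET('MYSET')
                     FROM(MYMAP-DATA) ERASE
           END-EXEC.
           EXEC CICS RETURN TRANSID('MYTX')
                     COMMAREA(WS-COMMAREA)
           END-EXEC.
```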


EXEC CICS commands require the parameters of some options to be enclosed in quotes and others not, depending on what is being referenced. Most programmers code from a reference book until they get the "hang", or concept, of which arguments are which, or they work from a "canned template" of sample code that they simply copy, paste, and edit to change the values.

Example of BMS map code

Basic Mapping Support (BMS) defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set, a load module in a CICS load library, and a symbolic map set, a structure definition (DSECT) in PL/I, COBOL, assembler, etc. that was copied into the source program.
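The macros themselves are missing at this point in the text; the following hedged sketch shows the general shape of a BMS map definition (all names, positions and attributes are invented):

```assembler
* Hypothetical BMS map set: one map with a protected title field
* and an unprotected input field. All values are illustrative.
MYSET    DFHMSD TYPE=&SYSPARM,MODE=INOUT,LANG=COBOL,TERM=3270
MYMAP    DFHMDI SIZE=(24,80)
         DFHMDF POS=(1,30),LENGTH=7,ATTRB=PROT,INITIAL='INQUIRY'
ACCTNO   DFHMDF POS=(3,1),LENGTH=8,ATTRB=(UNPROT,IC)
         DFHMSD TYPE=FINAL
         END
```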



Structure

In the z/OS environment, a CICS installation comprises one or more regions (commonly referred to as "CICS regions"), spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it is a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before being shut down for maintenance (to MVS or to CICS itself). Upon restart, a parameter determines whether the start should be "cold" (no recovery) or "warm"/"emergency" (a warm shutdown, or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time, because all the definitions are processed again.
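As a hedged sketch, the JCL to run a CICS region as a batch job might look like the following; the dataset names and the SIT suffix are invented placeholders, while DFHSIP is the CICS system initialization program:

```jcl
//CICSPROD JOB (ACCT),'CICS REGION',CLASS=A,MSGCLASS=X
//* Start CICS: DFHSIP reads the system initialization table (SIT).
//* START=AUTO lets CICS choose a warm or emergency restart.
//CICS     EXEC PGM=DFHSIP,PARM='SIT=6$,START=AUTO'
//STEPLIB  DD DISP=SHR,DSN=CICSTS.CICS.SDFHAUTH
//DFHRPL   DD DISP=SHR,DSN=CICSTS.CICS.SDFHLOAD
//         DD DISP=SHR,DSN=PROD.CICS.APPLOAD
```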

Installations are divided into multiple address spaces for a wide variety of reasons, such as:

  • Application separation,
  • Segregation of duties,
  • Avoiding the workload capacity limits of a single region, address space, or mainframe instance in the case of a z/OS SysPlex.

A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of Terminal-Owning Regions (TORs) that route transactions to multiple Application-Owning Regions (AORs), though other topologies are possible. For example, the AORs might not perform file I/O; instead there would be a "File-Owning Region" (FOR) that performed the file I/O on behalf of transactions in the AOR, since at the time a VSAM file could support recoverable write access from only one address space at a time.

But not all CICS applications use VSAM as the primary data source (or, historically, other single-address-space-at-a-time data stores such as CA Datacom); many use either IMS/DB or Db2 as the database, and/or MQ as the queue manager. In all of these cases, TORs can load-balance transactions across sets of AORs, which then use the shared databases/queues directly. CICS supports XA two-phase commit between data stores, so transactions spanning, for example, MQ, VSAM/RLS and Db2 are possible with ACID properties.

CICS supports distributed transactions using the SNA LU6.2 protocol between address spaces that can run on the same or different clusters. This allows ACID updates of multiple data stores by cooperating distributed applications. In practice there are problems with this if a system or communications failure occurs, because the transaction disposition (backout or commit) may be in doubt if one of the communicating nodes has not recovered. Hence the use of these facilities has never been very widespread.

Sysplex utilization

Around the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of getting CICS to exploit the new z/OS Sysplex mainframe line.

The Sysplex was to be based on CMOS (Complementary Metal Oxide Semiconductor) rather than the existing ECL (Emitter-Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than that of CMOS, which was being developed by a keiretsu with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. ECL was also expensive for users to run, because the gate drain current produced so much heat that the CPU had to be packaged in a special module, the Thermal Conduction Module (TCM), which had inert-gas pistons and needed to be plumbed with high-volume chilled water for cooling. However, the CPU speed of the air-cooled CMOS technology was initially much slower than that of ECL (notably the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was a particular concern for IBM in the CICS context, since almost all of the largest mainframe customers ran CICS, and for many of them it was the primary mainframe workload.

To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload. However, because of its semi-reentrant application programming model, a CICS address space could not exploit more than about 1.5 processors on one box, even with the use of MVS subtasks. Without a solution, these customers would tend to move to the competition rather than to Sysplex as they grew their CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC, which was fully reentrant, or to extend the approach customers had already adopted to more fully exploit the power of a single mainframe, Multi-Region Operation (MRO).

Eventually the second path was adopted after the CICS user community was consulted; it vehemently opposed breaking upward compatibility, given that at that point they had the prospect of Y2K to contend with and saw no value in rewriting and retesting millions of lines of mainly COBOL, PL/I, or assembler code.

The IBM-recommended structure for CICS on Sysplex was to place at least one CICS Terminal-Owning Region on each Sysplex node, dispatching transactions to many Application-Owning Regions (AORs) spread across the entire Sysplex. When these applications needed to access shared resources, they either used a Sysplex-exploiting data store (such as Db2 or IMS/DB) or concentrated the resource requests, by function shipping, into singular-per-resource Resource-Owning Regions (RORs), including File-Owning Regions (FORs) for VSAM and CICS data tables, and Queue-Owning Regions (QORs) for MQ, CICS Transient Data (TD) and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions.

In later releases and versions, CICS was able to exploit new Sysplex facilities in VSAM/RLS and MQ for z/OS, and to place its own data table, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility (CF), dispensing with the need for most of the RORs. The CF provides a mapped view of resources, including a shared time base, buffer pools, locks and counters, with hardware messaging assists that make sharing resources across the Sysplex both more efficient than polling and more reliable (using a semi-synchronized backup CF in case of failure).

By this time, the CMOS line had individual boxes that exceeded the power available from the fastest ECL box, with more processors per CPU, and when these were coupled together, 32 or more nodes could scale to two orders of magnitude more total power for a single workload. For example, in 2002 Charles Schwab operated a "MetroPlex" consisting of a redundant pair of its mainframe Sysplexes at two sites in Phoenix, AZ, each with 32 nodes driven by a single shared CICS/Db2 workload, to support the enormous volume of dotcom-bubble-era web client requests.

This cheaper, far more scalable CMOS technology base, together with the huge investment costs of reaching 64-bit addressing and of independently producing cloned CF functions, drove the makers of IBM mainframe clones out of the business one by one.

CICS recovery / restart

The aim of recovery/restart in CICS is to minimize and, if possible, eliminate damage done to the online system when a failure occurs, so that system and data integrity are maintained. If the CICS region was shut down rather than failing, it will perform a "warm" start, using the checkpoint written at shutdown. The CICS region can also be forced to "cold" start, which reloads all definitions and erases the log, leaving the resources in whatever state they happen to be in.

Below are some of the resources that CICS considers recoverable. If these resources are to be recoverable, special options must be specified in the relevant CICS definitions:

  • VSAM files
  • CICS-maintained data tables (CMT)
  • Intrapartition TDQ
  • Temporary storage queue in auxiliary storage
  • I / O messages from / to transactions on a VTAM network
  • Other database / queue resources associated with CICS that support two-phase XA commit protocol (e.g. IMS / DB, Db2, VSAM / RLS)

CICS also offers extensive recovery/restart facilities that users can employ to establish their own recovery/restart capability in their CICS systems. Commonly used recovery/restart facilities include:

  • Dynamic Transaction Backout (DTB)
  • Automatic restart of the transaction
  • Resource recovery using the system log
  • Resource recovery using the journal
  • System restart
  • Advanced recovery function


Each CICS region comprises one major task on which every transaction runs, although certain services, such as access to Db2 data, use other tasks (TCBs). Within a region, transactions are cooperatively multitasked: they are expected to be well-behaved and to yield the CPU rather than wait. CICS services handle this automatically.

Each unique CICS "task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the "Storage Control program" (part of the CICS nucleus, or "kernel"), which is analogous to an operating system.

A CICS system consists of the online nucleus, batch support programs, and application services.


Nucleus

The original CICS nucleus consisted of a number of functional programs written in 370 assembler until V3:

  • Task Control Program (KCP).
  • Storage Control Program (SCP).
  • Program Control Program (PCP).
  • Program interruption control program (PIP).
  • Interval Control Program (ICP).
  • Dump Control Program (DCP).
  • Terminal Control Program (TCP).
  • File Control Program (FCP).
  • Transient Data Control Program (TDP).
  • Temporary Storage Control Program (TSP).

Starting with V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language, which is compiled into assembler.

The previous structure did not enforce separation of concerns and so had many inter-program dependencies that could lead to errors unless thorough code analysis was performed. The new structure was more modular and therefore more resilient, as it was easier to change without side effects. The first domains were often built with the name of the previous program but without the trailing "P", for example Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests. Initially this proved expensive for frequently called domains (such as Trace), but the use of PL/AS macros allowed these calls to be in-lined without compromising the separate domain design.

In later releases, completely redesigned domains were added, such as the logging domain DFHLG and the transaction domain DFHTM, which replaced the Journal Control Program (JCP).

Support programs

In addition to its online capabilities, CICS has several support programs that run as batch jobs.

  • Preprocessor for high-level language (macro-level) programs.
  • Command language translator.
  • Dump utility - prints formatted dumps generated by CICS Dump Management.
  • Trace utility - formats and prints the CICS trace output.
  • Journal formatting utility – prints a formatted copy of the CICS journal.

Application services

The following components of CICS support application development.

  • Basic Mapping Support (BMS) offers device-independent terminal input and output.
  • APPC support, which provides LU6.1 and LU6.2 API support for collaboration between distributed applications that support two-phase commit.
  • The Data Interchange Program (DIP) provides support for the IBM 3770 and IBM 3790 programmable devices.
  • 2260 compatibility allows programs written for IBM 2260 display devices to run on 3270 displays.
  • EXEC interface program - the stub program that converts calls generated by commands into calls to CICS functions.
  • Built-in functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval.


Pronunciation

Different countries have different pronunciations of CICS:

  • Within IBM (for example, in Tivoli) it is referred to as /ˈkɪks/.
  • In the US, it is more usually pronounced by reciting each letter, /ˌsiːˌaɪˌsiːˈɛs/.
  • In Australia, Belgium, Canada, Hong Kong, the United Kingdom, and some other countries, it is pronounced /ˈkɪks/.
  • In Finland it is pronounced [kiks].
  • In France it is pronounced [se.i.se.ɛs].
  • In Germany, Austria and Hungary it is pronounced [ˈtsɪks] and more rarely [ˈkɪks].
  • In Greece it is pronounced kiks.
  • In India it is pronounced kicks.
  • In Iran it is pronounced kicks.
  • In Israel it is pronounced C-I-C-S.
  • In Italy it is pronounced [ˈtʃiks].
  • In Poland it is pronounced [ˈkʲiks].
  • In Portugal and Brazil it is pronounced [ˈsiks].
  • In Russia it is pronounced kiks.
  • In Slovenia it is pronounced kiks.
  • In Spain it is pronounced [ˈθiks].
  • In Sweden it is pronounced kicks.
  • In Uganda it is pronounced kicks.
  • In Turkey it is pronounced kicks.

See also


External links