Mappings In Informatica: Create, Components, Parameter, Variable

What is a Mapping?

Mapping is a collection of source and target objects linked together by a set of transformations. These transformations consist of a set of rules, which define the data flow and how the data is loaded into the targets.

A mapping consists of the following set of objects:

Source Definition – Source definition defines the structure and characteristics of the source, its underlying data types, the type of the data source, etc.

Transformation – Transformation objects define how the source data is transformed, and various functions can be applied during the process.

Target Definition – Target definition defines the final target where the data will be loaded.

Links – Links connect the source definition to the different transformations and target tables. They define how the data flows from the source through the transformations to the target.


Why do you need Mapping?

Mapping is an object in Informatica with the help of which you can define how the source data is modified before it reaches the destination or target object. For example, if the employee name in your source system is “Bill Clinton” and the target system requires it in the format “Clinton Bill”, such an operation can be designed at the mapping level. In basic terms, what you do with the source data is defined at the mapping level.

Mapping is the basic Informatica object with the help of which we can define the data transformation details and source/target object characteristics. Mappings help us define the data transformation at the individual column level for each row. A single mapping can even handle multiple sources and targets.

Components of Mapping

Basic components of a mapping are

Source tables

Mapping parameters and variables

Target objects

Mapping transformations

There are various objects that constitute a mapping. A mapping can consist of sources, targets, mapping parameter and variables, mapplets, various transformations, and user-defined functions.

Mapping Source: Mapping sources are the objects from which you fetch the source data. It can be a database table, a flat file, an XML source, or a COBOL file source.

Mapping target: Mapping targets are the destination objects where the final processed data gets loaded. A mapping target can be a relational database table, a flat file, or an XML file. Sources and targets are mandatory in any mapping, though their types can differ.

Mapping Parameters and Variables: Mapping parameters and variables help you create temporary objects to define and store values while the mapping processes data. They are optional, user-defined objects which can be created for a mapping and can be referenced and updated for a specific requirement. We will learn more about mapping parameters and variables later in this tutorial.

Mapplets: These are objects which consist of a set of transformations, sources, or targets. Mapplets are generally created to reuse the existing functionality of a set of transformations and can be used in any number of mappings.

What is Stage Mapping?

A stage mapping is a mapping in which we create a replica of the source table. For example, if a production system has an “employee” table, you can create an identical table “employee_stage” in the ETL schema.

In Stage Mappings,

Source and Target tables have identical structures

The data in the target table is a replica of source table data or

Data in stage (target) table is a subset of source data.

For example, if your source table contains employee details for deptno 10, 20, 30, and 40, the staging table can be a table having employee records of deptno 10 and 30 only.

The purpose of creating stage tables in a data warehouse is to make the data transformation process efficient by fetching only the data that is relevant to us, and to minimize the dependency of the ETL/data warehouse on the real-time operational system.

How to Create a Mapping

In this exercise, we will create a stage mapping, in which the source will be the “emp” table and the target will be the “emp_target” table.

Naming Convention – mapping names are prefixed with ‘m_’ followed by the source and target table names, separated by underscores.

Example – if you are loading the emp_target table from the emp table, then the mapping name can be ‘m_emp_emp_target’.

Step 1 – Launching Mapping Designer

Open Informatica Designer Tool

Step 2 – In Mapping Designer

Select Create Option

Step 3 – Enter Mapping name as ‘m_emp_emp_target’ and select OK Button.

Mapping will be created and listed under mappings folder.

A mapping must have at least one source and one target; next, you will add sources and targets to the mapping.

Step 4 – In this step we will,

Select “emp” source table under sources folder.

Drag and drop “emp” table into mapping designer.

In mapping designer, imported source table will be shown.

Note – When you import any relational (database) table into a mapping, an additional object of source qualifier type is also created. This source qualifier transformation is necessary and helps the Informatica integration service identify the source database table and its properties. Whenever you import a source table, a source qualifier transformation will also be created. You should never delete a source qualifier object in a mapping.

Step 5 – In this step we will,

Select the “emp_target” target table under the Targets folder.

Drag and drop “emp_target” table into mapping designer

In mapping designer, “target table” will be imported and shown.

To manage the view space, you can iconize these objects in the mapping.

After selecting the option “Arrange all Iconic”, the workspace will look like this.

Step 7 – In Informatica, we design with the flow from left to right, so source tables should be on the left side and target tables on the right. To arrange the tables in our workspace, select the “emp_target” table, then drag and drop it to the right side of the emp table.

After this rearrangement, the workspace will look like this.

Note – Periodically use the “ctrl+s” shortcut to save changes to the repository.

Step 8 – Now you have source and target tables in your mapping, but the mapping is not yet complete. The source and target tables should be linked to complete a mapping.

To Link source and targets

Step 10 – To link source with target table

Select Source table columns.

Drag and drop columns to the target table.

The Source and the Target tables will be linked, and connecting arrows will appear from source to the target table.

Step 11 – Use the shortcut “ctrl+s” to save the changes to your mapping. In the output window, you can see the message of mapping validation/parsing, which confirms that your mapping is valid. Also, there will be a temporary green tick mark next to the mapping name in the mapping folder tree, indicating the mapping was saved successfully.

In mappings, there can be a requirement to pass values into the mapping, or a scenario where we need to calculate temporary values and store them for the next session run of the mapping. For these purposes, we create mapping parameters and variables.

Mapping Parameters and Variables

Like every programming language, Informatica has its own way of defining parameters and variables. But unlike other programming environments, Informatica is not a code-based tool, so to create parameters and variables you have to follow its predefined syntax and navigation.

Difference between parameters and variables –

Mapping Parameters: Mapping parameters are objects whose value, once assigned, remains constant throughout the mapping run. For example, if you have created a mapping parameter deptno=20, the value 20 will be constant for the whole mapping run; wherever the parameter is referenced, it will always return the value 20 for that instance of the mapping run. For a new mapping instance, the parameter value can be redefined.

Mapping Variables: Mapping variables are objects which can be referenced throughout the mapping run (to access their values) and whose values can be reassigned. For example, a mapping variable total_salary can be used in a mapping, and its value can be updated based on salaries.

The mapping parameters and variables are specific to a mapping only. They cannot be referenced in another mapping.

How to Create Mapping Parameter

When you create a mapping parameter, the Integration Service looks for its assigned value during the execution of the mapping. The value can be assigned in any of the following places:

Inside a parameter file (a sample file is shown after the steps below)

In pre-session variable assignment

Initial value in repository

Default value assigned during variable creation

Step 1 – To Create mapping parameter – In mapping designer,

Select mappings menu

Select parameters and variables menu

Step 2 – On the next screen,

From drop down, select type as parameter

Enter parameter name as $$Deptno

Enter an initial value of 10

Select OK button

Now you have created a mapping parameter deptno with an initial value of 10, and this parameter can be referenced inside the mapping.
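For reference, the parameter value is usually supplied through a parameter file that the session or workflow points to. Below is a minimal sketch of such a file; the folder, workflow, and session names are hypothetical placeholders based on the naming convention used for this mapping, and only the $$Deptno entry corresponds to the parameter created above.

[MyFolder.WF:wkf_s_m_emp_emp_target.ST:s_m_emp_emp_target]
$$Deptno=10

If the Integration Service does not find a value in a parameter file or a pre-session assignment, it falls back to the initial value saved in the repository, as listed above.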

How to Create Mapping Variable

Step 1 – In mapping designer

Select mappings menu

Select parameters and variables menu

Step 2 – On the next screen

From drop down, select type as variable

Enter variable name as $$TotalSalary

Select DataType as decimal

Enter an initial value of 0

Select OK button

This will create a mapping variable.

Note – mapping parameter and variable names always begin with $$.
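As a hedged illustration of how a mapping variable is typically updated, a port in an Expression transformation could use Informatica’s SETVARIABLE function, for example:

SETVARIABLE($$TotalSalary, $$TotalSalary + SAL)

Here SAL is an assumed salary port from the emp source used earlier, and the exact run-time behavior and the value saved at the end of the session depend on the variable’s aggregation type. The saved value is then available to the next session run, which is the main purpose of a mapping variable.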

Summary

Mappings are important in Informatica for transforming source data to match the target as per project requirements. We have discussed stage mappings and the concepts of mapping variables and parameters.

You're reading Mappings In Informatica: Create, Components, Parameter, Variable

How To Create A Parameter In Report Builder

In this tutorial, you’ll learn how to create a parameter in Report Builder. Parameters allow end users to interact with a paginated report.

Parameters are similar to filters, but they are functional only when you’re in the run view of Report Builder. Adding this feature offers great assistance to end users as it allows them to filter data according to their needs.

This tutorial will cover a comprehensive discussion on everything related to parameters where you’ll learn how to add and remove parameters. The tutorial will also show you how to remove blank or null values from your report, and handle errors in Report Builder.

At the top part of the Query Designer, you can see a label called Parameters with two boxes for each Dimension.

When you run it, you’ll see that you need to select a city or cities before viewing the report.

In the resulting report, you’ll notice that even with the city selection, it’s still returning blank values. To remove blank values from your parameter, go back to the design view and open the Report Parameter Properties window.

The next step is crucial. If you don’t do this, you’ll get an error.

Go back to the Query Designer. If you want to remove blank or null values, you need to set the Operator to Equal and remove any filter expression.

You’ll then see that all the blank and null values from the table have been removed.

This is an explanation as to why editing the Query Designer when removing blanks is important. Let’s look at a scenario where you skip going back to Query Designer and instead run the report after only unchecking the Allow blank value and Allow null value options.

If you do this, you’ll be faced with an error message.

The error is saying that the AllowBlank property of the City parameter is false. However, the default value contains a value that violates the AllowBlank property condition. This means that there’s a contradiction in the City parameter’s properties.

Remember that in the Parameter Properties, you’ve already set it to not allow blank or null values. However, in the Query Designer, the current expression already sets the City to not equal blank values. Therefore, there’s a redundancy in the formatting.

Moreover, because you’re using City as a parameter, adding a filter expression is no longer needed. Error messages in Report Builder are built intuitively as they specify what’s happening.

They allow you to fix the mistake first before continuing with your work. So it’s always a best practice to routinely Run your report.

This ensures that errors get detected early on. Instead of revising everything when you’ve almost finished, you can approach errors one at a time.

If you want to add another parameter in your report, open Query Designer. Then, drag the item from the measure group to the dimension tab.

Before you run the report, you first need to check the new parameter’s properties. Edit any properties if needed.

When you run the report, you need to set the two properties.

You can also continue adjusting the parameters as you view the report.

The parameter has now been deleted from the report. When you run the query, you’ll only be filtering by Year. If, for example, you select 2023, the report will then only show values with 2023 as the year.

To efficiently remove blanks and null values from your report, you can use a Boolean expression.

For this example, you need to create a Boolean expression so that you can keep the rows with non-blank values.

This will return True if a row value is blank, and False otherwise.

Then, instead of Text, choose Boolean. For the operator, use the equal sign ( = ). In the Value textbox, write false.

So behind the scenes, this filter first evaluates if a value is blank (true) or not (false). Then, it filters out values that return true.
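As a concrete sketch, assuming the field being checked is City (adjust the field name to your own dataset), the filter’s Expression box could contain a standard Report Builder expression such as:

=IsNothing(Fields!City.Value) Or Fields!City.Value = ""

With the expression type set to Boolean, the operator set to =, and the value set to false, only the rows where this check evaluates to false, i.e., the non-blank rows, are kept.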

If you run your report, you’ll see that it no longer contains blank or null values.

You can use this both at the dataset level and at the Tablix level.

This tutorial offers a comprehensive discussion on parameters in Report Builder. This feature is especially beneficial for end users. It allows them to see specific details in a paginated report.

And as you’ve learned, adding and removing parameters is easy. The trick is to make sure the parameter properties are set correctly so that blank or null values will be excluded.

But even if you make a mistake, you’ll be notified through an error message. Overall, Report Builder is an easy and user-friendly program to use.


Auto Logout In Linux Shell Using Tmout Shell Variable

Introduction

When using a Linux shell, it’s essential to ensure that users are logged out when they are not actively using the system, for both security and efficiency. This can be achieved by setting an automatic logout timer using the TMOUT shell variable. In this article, we will explore how to set up auto logout in the Linux shell using the TMOUT shell variable, its benefits, and how to modify its settings.

What is TMOUT Shell Variable?

TMOUT is a shell variable in the Linux shell that defines the number of seconds a shell session can be idle before it is automatically logged out. When this variable is set, the shell terminates the session if there is no input activity for the set time. This feature ensures that the system is secure and that users do not waste system resources by staying logged in when they are not using the system.

How to Set up Auto Logout in the Linux Shell Using the TMOUT Shell Variable

Setting up the TMOUT shell variable is a simple process that requires modifying the user’s shell configuration file. In most Linux distributions, this file is either ~/.bashrc or ~/.bash_profile. To set up auto logout in the Linux shell using the TMOUT shell variable, follow these steps −

Open your shell configuration file using a text editor −

$ nano ~/.bashrc

Add the following line to the file to set the TMOUT variable to the desired value (in seconds) −

TMOUT=600

In this example, the TMOUT variable is set to 600 seconds, which means that the shell session will be automatically terminated after 10 minutes of inactivity.

Save the changes to the file and exit the text editor.

To apply the changes, source the shell configuration file −

$ source ~/.bashrc

After completing these steps, the TMOUT shell variable will be active, and the shell will automatically log out the user after the specified period of inactivity.

Benefits of Auto Logout Using TMOUT Shell Variable

There are several benefits of using the TMOUT shell variable to set up an automatic logout timer in the Linux shell −

Enhanced Security

An idle shell session is a security risk as it can be hijacked by malicious users. By setting the TMOUT variable, you can ensure that your system remains secure by automatically logging out idle users.
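If you want to enforce the timeout so that users cannot simply unset it, one common hardening approach is to define the variable as read-only in a system-wide profile script. The sketch below assumes root access and a hypothetical file name, and it applies to login shells system-wide rather than per user:

TMOUT=600
readonly TMOUT
export TMOUT

Placing these lines in a script such as /etc/profile.d/autologout.sh makes later attempts to change or unset TMOUT in a session fail, so weigh this against the per-session flexibility discussed later in this article.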

Resource Optimization

When users remain logged in without actively using the system, system resources are wasted. The auto logout feature ensures that resources are optimized by freeing them up when users are not using the system.

Increased Productivity

When users are automatically logged out after a period of inactivity, they are forced to re-enter their credentials to log back in. This process serves as a reminder that they are taking up system resources and encourages them to use the system efficiently, leading to increased productivity.

Modifying TMOUT Shell Variable Settings

Once the TMOUT shell variable is set up, it’s possible to modify its settings to meet your specific needs. To modify the settings, follow these steps −

Open your shell configuration file using a text editor −

$ nano ~/.bashrc

Modify the value of the TMOUT variable to the desired value (in seconds).

TMOUT=1200

In this example, the TMOUT variable is set to 1200 seconds, which means that the shell session will be automatically terminated after 20 minutes of inactivity.

Save the changes to the file and exit the text editor.

To apply the changes, source the shell configuration file −

$ source ~/.bashrc

After completing these steps, the new TMOUT value will be active, and the shell will automatically log out the user after the newly specified period of inactivity.

It’s important to note that modifying the TMOUT shell variable settings will affect all users who use the same shell configuration file. Therefore, it’s essential to communicate any changes to all users to avoid confusion and frustration.

Disabling TMOUT Shell Variable

If you no longer need the TMOUT shell variable, you can disable it by removing it from your shell configuration file. To do this, follow these steps −

Open your shell configuration file using a text editor −

$ nano ~/.bashrc

Find the line that sets the TMOUT variable and remove it.

Save the changes to the file and exit the text editor.

To apply the changes, source the shell configuration file −

$ source ~/.bashrc

After completing these steps, the TMOUT shell variable will be disabled, and the shell will no longer automatically log out users after a period of inactivity.

Another important consideration when using the TMOUT shell variable is that it can potentially cause data loss or disruption for users who are working on a task that takes longer than the set time limit. For example, if a user is in the middle of editing a large file and the session times out, they may lose unsaved changes.

To mitigate this risk, it’s a good idea to provide users with a way to disable or adjust the TMOUT variable on a per-session basis. One way to do this is to provide a command that users can run to temporarily disable the variable or set a longer time limit for their current session. For example, you could create an alias for the command export TMOUT=0, which sets the variable to zero and disables automatic logout, as sketched below.
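A minimal sketch of such an alias, assuming TMOUT has not been made read-only and using a hypothetical alias name, could be added to the user’s ~/.bashrc:

alias no_autologout='export TMOUT=0'

After sourcing ~/.bashrc, the user can run no_autologout before starting a long task to disable the idle timeout for that session only.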

It’s also important to communicate clearly with your users about the automatic logout policy and the reasons for implementing it. Make sure that users are aware of the time limit and any potential risks associated with the policy, and provide them with guidance on how to work within its limits.

In addition, you may want to consider logging user sessions to help track and monitor user activity. This can help you identify potential security issues or violations of your security policies, and provide you with a record of user activity that can be useful for auditing and analysis.

Overall, the TMOUT shell variable is a valuable tool for enforcing automatic logout policies in Linux environments. However, it’s important to use the variable judiciously and in combination with other security measures to ensure the overall security and integrity of your system. By doing so, you can help protect your system from unauthorized access and mitigate the risk of data loss or disruption for your users.

Conclusion

In conclusion, the TMOUT shell variable is an essential feature that can enhance the security, efficiency, and productivity of a Linux shell. By automatically logging out idle users, the system remains secure, system resources are optimized, and productivity is increased. Setting up the TMOUT shell variable is a simple process that requires modifying the user’s shell configuration file. It’s also possible to modify the settings and disable the feature altogether. It’s crucial to communicate any changes made to the TMOUT shell variable settings to all users to avoid confusion and frustration.

Workflow Monitor In Informatica: Task & Gantt Chart View Examples

In our previous tutorial, we discussed the workflow, which is nothing but a group of commands or instructions to the integration service. It defines how to run tasks like the command task, session task, e-mail task, etc. To verify that everything is streamlined and executed in the desired order, we need the Workflow Monitor.

What is Workflow Monitor?

The workflow monitor is a tool with the help of which you can monitor the execution of workflows and the tasks assigned to the workflow.

In workflow monitor you can,

See the details of execution

See the history of the workflow execution

Stop, abort or restart workflows, and tasks

Display the workflows that have been executed at least once


The workflow monitor consists of the following windows –

Navigator window – shows the monitored repositories, folders, and integration services

Output window – displays the messages from integration services and repository

Properties window – displays the details/properties about tasks and workflows

Time window – displays the progress of the running tasks & workflows with timing details.

Now, let’s see what we can do in the Workflow Monitor.

How to open Workflow Monitor

Step 2 – This will open workflow monitor window

In the workflow monitor tool, you will see the repositories and associated integration services on the left side. Under the status column, you will see whether you are connected to or disconnected from the integration service. If you are in disconnected mode, you won’t see any running workflows. There is a time bar which helps us determine how long a task took to execute.

Step 3 – The workflow monitor is in disconnected mode by default. To connect to the integration service:

Select connect option

After connecting, the monitor will show the status as connected.

Views in Workflow Monitor

There are two types of views available in Informatica workflow monitor

Task view

Gantt View

Task View

Task view displays the workflow runs in report format, and it is organized by workflow runs. It provides a convenient approach to compare workflow runs and filter details of workflow runs.

Task view shows the following details

Workflow run list – Shows the list of workflow runs. It contains folder, workflow, worklet, and task names. It displays workflow runs in chronological order with the most recent run at the top. It displays folders and Integration Services alphabetically.

Status message – Message from the Integration Service regarding the status of the task or workflow.

Node – The node of the Integration Service that executed the task.

Start time – The time at which the task or workflow started.

Completion time – The time at which the task or workflow completed execution.

Status – Shows the status of the task or workflow: whether the workflow started, succeeded, failed, or was aborted.

Gantt Chart View

In the Gantt chart view, you can see a chronological view of the workflow runs. The Gantt chart displays the following information:

Task name – Name of the task in the workflow

Duration – The time taken to execute the task

Status – The most recent status of the task or workflow

To switch between Gantt chart and task views

Example- How to monitor and view details

In previous examples, we have created a

Mapping “m_emp_emp_target”: A mapping is a set of instructions on how to modify the data and processing of transformations that affects the record set.

Session “s_m_emp_emp_target”: A session is a higher-level object on top of a mapping which specifies the properties of execution, for example, performance tuning options, connection details of sources/targets, etc.

Workflow “wkf_s_m_emp_emp_target”: A workflow is a container for the session and other objects, and it defines the timing of the execution of tasks and the dependency or flow of execution.

Now, we will analyze the details of execution in this topic.

Step 1 – Restart the workflow, as described in the previous topic.

Step 2 – Go to the workflow monitor; in the monitor window you will see details, as shown in the screenshot, such as the repository, workflow run details, node details, workflow run start time, workflow run completion time, and status.

Step 3 – Here you can view the currently running workflow, which has the status “running”.

Step 4 – Once the workflow execution completes, its status will change to succeeded/failed, along with start and end time details.

Step 5 – To view the task details

In the pop-up window select “get run properties”

A properties window would appear with the task details

Here we chose “Task Details” to view. It will display all the details like Instance Name, Task Type, Start Time, Integration Service Name, etc.

Task details –

Source and Target Statistics

Source and target statistics give the details of the source and target, for example, how many rows were fetched from the source, how many rows were populated in the target, the current throughput, etc.

In the following screen, 14 records are fetched from the source, and all 14 are populated in the target table.

Applied rows signify how many records Informatica tried to update or insert into the target.

Affected rows signify how many of the applied rows actually took effect in the target. Here, all 14 rows are successfully loaded in the target, so the count is the same for both.

Rejected rows signify how many rows were dropped due to target constraints or other issues.

In this tutorial, you have learned how to open and monitor the workflows and tasks using workflow monitor.

Fixed Cost Vs Variable Cost

Difference between Fixed Cost vs Variable Cost

The following article provides an outline for Fixed cost vs Variable cost. The major difference between these two costs is that the variable cost depends on the production output, while the fixed cost is independent of the output.


What is Fixed Cost?

Fixed cost is defined as a cost that does not change its value with any change (increase or decrease) in the goods produced or services sold. Changes in activity levels do not affect fixed costs. This does not mean that the cost will remain fixed forever; it means it will be constant for a particular period of time. For example, the interest amount charged is fixed for the period unless and until the loan is renewed. Fixed cost and variable cost are the two main pillars in any industry’s production and service line. There are two types of fixed costs: committed fixed costs and discretionary fixed costs. Fixed cost can be considered a sunk cost.

What is Variable Cost?

Variable cost is defined as a cost that changes in proportion to the volume of goods produced or services sold. For example, if the labor charge is Rs 5 per unit, then producing 100, 200, or 300 units costs (5*100 = Rs 500), (5*200 = Rs 1000), and (5*300 = Rs 1500) respectively.

Head to Head Comparison Between Fixed Cost vs Variable Cost (Infographics)

Below are the top 8 differences between Fixed cost and Variable Cost:

Key Differences between Fixed Cost vs Variable Cost

Examples of variable costs are raw materials, labor, packaging, freight, and commission. As the volume increases, these costs increase because each extra item produced requires more materials, labor, etc. Hence these costs are directly proportional to the volume of items produced.

Examples of fixed costs are rental payments, depreciation, insurance, interest payments, etc. These items do not change even if you increase the volume of production, e.g., even if you produce one extra item, the rental payment remains the same; hence, it is a fixed cost.

Variable cost varies with the variation in the production volume. Fixed cost has no relation to the output capacity.

Fixed cost does not change with the volume and remains constant for a given period of time, e.g., until a new lease contract is signed, the lease payment remains fixed. Variable cost changes with the production volume.

Example of calculating the costs: Suppose the total fixed cost is Rs 1000 and the total units produced are 10; therefore, the fixed cost per unit is Rs 1000/10 = Rs 100. The variable cost of labor is Rs 5 per unit of production, so for making 10 units it would be 10*5 = Rs 50. The total cost of production is the sum of the total variable cost and the total fixed cost, i.e., Rs 50 + Rs 1000 = Rs 1050.

Here, the only variable cost considered is labor; we need to consider the variable cost of all the other items and add it to the fixed cost to get the total cost. The fixed cost per unit changes: as the number of units increases, the fixed cost per unit decreases. The variable cost per unit remains constant, and the total variable cost is directly proportional to the change in production.

If production increases, i.e., if the number of units produced increases, the fixed cost per unit produced drops significantly, increasing the possibility of a greater profit margin and achieving economies of scale.

As mentioned above, to achieve economies of scale, production needs to be increased to decrease the per-unit fixed cost. So, the risk associated with fixed cost is higher than with variable cost.

Unless and until production takes place, variable cost is not incurred, but fixed cost occurs even if there is no production. For example, even if no laptop is produced in a laptop factory, the rental charges still need to be paid – that is the fixed cost; the labor charges are not paid as there is no production – that is the variable cost. The fixed cost cannot be controlled and has to be paid, whereas the variable cost can be controlled through the production level.

Comparison Table between Fixed Cost vs Variable Cost

Let’s discuss the top comparison between Fixed Cost vs Variable Cost:

Basis of Comparison – Fixed Cost vs Variable Cost

Definition – Fixed cost: the cost remains fixed for a given period. Variable cost: the cost varies with output.

Dependence – Fixed cost: independent of the company’s production volume. Variable cost: dependent on the company’s production volume.

Behavior – Fixed cost: remains constant for a given time (time-related). Variable cost: changes with the output level (volume-related).

Formula – Fixed cost per unit = total fixed cost / number of units produced. Total variable cost = variable cost of one item * number of items produced.

Economies of scale – Fixed cost: the greater the fixed cost, the more sales the company must target to reach the break-even point. Variable cost: remains flat per unit in nature.

Risk associated – Fixed cost: riskier, since it is incurred regardless of the production level. Variable cost: risk varies, as the cost depends on the amount produced.

Occurred when – Fixed cost: occurs whether or not anything is produced and cannot be controlled. Variable cost: occurs only when production starts, depends directly on the number of units produced, and can be controlled.

Examples – Fixed cost: salary, tax, depreciation, insurance, etc. Variable cost: cost of goods sold, administrative and general expenses on the income statement.

Conclusion

Variable and fixed costs are complete opposites of each other, but both play a major role in financial analysis. Higher production volumes increase profitability as the fixed cost per unit decreases, while variable cost drives the contribution margin; therefore, both have unique importance in their own ways.


Top 8 Awesome Components Of Azure

Introduction to Azure Components


Components of Azure

The services offered by Azure are categorized into different components; below are the key components of Azure.

1. Compute

As the name suggests, it offers services such as application development, hosting, deployment, etc. In Azure, a few of the commonly used compute options are:

Virtual Machines (VMs): This is an IaaS service which allows creating virtual machines according to the hardware and operating system requirements. A VM can be accessed publicly or restricted to a VNet. VMs can be created using the Azure CLI, PowerShell, or the Azure portal (a hedged CLI sketch follows this list).

VM Scale Sets: These allow you to configure and create thousands of VMs with a similar configuration within minutes.

Azure App Service: This service hosts web applications, REST APIs, or mobile back ends. It powers applications with Azure features such as scalability, security, etc.

Azure Functions: This service offers a way to run a small piece of functionality by deploying just the code. It is used for code reusability, i.e., deploy once and use multiple times. Azure Functions work independently of any application to perform a particular task.

Azure Batch Service: Azure Batch runs batch jobs to perform large-scale computation tasks. These jobs manage a pool of resources (i.e., virtual machines), install the required applications, and run the jobs on the nodes. So, Batch is basically used as a platform for applications that require large-scale computation.
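As a rough illustration of the CLI route mentioned for virtual machines above, the commands below show the general shape of creating a resource group and a VM. The resource group, VM name, and image alias are hypothetical, and an authenticated Azure CLI session (az login) is assumed:

$ az group create --name demo-rg --location eastus

$ az vm create --resource-group demo-rg --name demo-vm --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

The same VM could equally be created through PowerShell or the Azure portal.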

2. Storage

Blob storage: It can be used to store unstructured data such as pictures and raw data, or semi-structured data such as CSV or XML files. The files are stored in a directory-like structure called a container.

Azure Tables: As the name suggests, this service stores data in the form of tables. These are NoSQL tables, i.e., they follow a schema-less structure. The tables can be created very easily and can be accessed in code with the help of the provided URL. Data is stored in key-value form in the backend.

Azure File Storage: Mainly used when a legacy system’s file server needs to be migrated. It stores data on a file share that can be mounted as a local directory on Azure VMs and can be accessed by an on-premises application using the REST API.

Azure Queues: As the name suggests, these are used to queue up messages and transfer them within an application. A process can interact with the queues, pick up messages, perform the required operation, and save the results either to storage or to a database (a hedged CLI sketch for creating storage resources follows this list).
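As a hedged sketch of how these storage services are provisioned from the CLI, the commands below create a storage account and a blob container. The account and container names are placeholders (storage account names must be globally unique), and the resource group is assumed to exist already:

$ az storage account create --name demostorage123 --resource-group demo-rg --location eastus --sku Standard_LRS

$ az storage container create --name raw-data --account-name demostorage123 --auth-mode login

Tables, file shares, and queues can be created in a similar way under the same storage account.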

3. Database

This component offers data management services, which include SQL as well as NoSQL tools. SQL Server, Azure Database for MySQL, etc., are supported as relational databases, whereas databases like Cassandra can be used as NoSQL databases. There is also Cosmos DB (a document database) built for fast and enhanced performance.

4. Security And Authentication

This component is responsible for all the security issues, like identifying and responding to security threats, managing user access, authentication, authorization, encryption of keys, etc. Azure Active Directory (AAD), Azure Key Vault, etc., are commonly used services.

Azure Active Directory (AAD): This is a cloud-based identity and access management service.

Azure Key Vault: This is a hosted cloud management service used to encrypt and securely store keys, passwords, connection strings, certificates or any other secrets.

5. Networking

6. Monitoring

Azure monitoring services help applications enhance their performance by collecting and analyzing logs from either cloud or on-premises applications. They are used to identify scope for performance improvement by looking at the stats generated by the Azure Monitoring service. All the data collected from applications is stored as two types: Metrics and Logs.

Metrics: These are numbers used to analyze the application and can describe the application’s performance at any point in time based on current stats.

Logs: Logs are basically records containing events that happened and the data generated during those events. So, based on both the events and the application’s behavior, application performance can be analyzed.

7. Web Services

The Web Application service is used to deploy web applications developed on local machines. These applications can be developed in Java, .NET, PHP, NodeJS, etc. It offers features such as scalability and high availability, and it supports both Windows and Linux systems. The user needs to focus only on the development part; execution and maintenance are taken care of by Azure itself. By default, web services are public and can be accessed over the internet.

8. Mobile Services

This component offers backend services to applications running on phones. It is suitable for applications that have a large number of daily hits or store a large amount of data. It provides a service called Notification Hubs that can be utilized to send notifications to phones. A notification hub can connect to various notification service providers like Apple, Google, and Windows, and SDKs can be used to connect to the hub. It can also be used to send a notification to a single user, a group of users, or as a mass notification. In this way, it makes a developer’s life easier.

