# Dynamic Bus Fare Pricing Comparison And Detection


This article was published as a part of the Data Science Blogathon.


Online bus ticketing platforms have expanded across geographies while providing value-added services such as insurance, various amenities, etc. A once technologically backward and resource-intensive public transport industry has transformed into a travel-industry behemoth within a decade. The major players in the industry are Redbus, Makemytrip, Goibibo and EaseMyTrip, all fighting to capture market share and assert dominance.

Though plenty of levers are available to capture market share, pricing remains the most important one in India. Pricing can make or break bus operators. In an industry that already has high operating costs and low margins, getting the price right is the most fundamental decision every operator has to make, whether large or small. One Bain study found that 78% of B2C firms believe they can still improve their pricing strategies.

What pricing strategies can be used?

Zonal Pricing – Simple and direct pricing based on zones. Used by government public transport.

Distance Pricing – Pricing based on distance travelled; used mainly by buses on hire and tourist operators.

Origin destination Pricing – Based on the destination; if it is a major tourist destination, prices are higher.

Seasonal Pricing – Based on the season.

Last-minute Pricing – Some operators drastically reduce or increase prices to increase volumes.

Dynamic Pricing – Most common in eCommerce, where marketplaces have greater price-setting flexibility; adoption in the bus service industry is sparse.

This article explores the world of online bus ticket pricing. We will cover the following:

Problem statement

Explore dataset

Data preprocessing

Exploratory data analysis

Exploring feasible solutions

Test for accuracy and evaluation

1. Problem Statement:

Large bus operators have higher pricing power as they are already well placed in the market. The problem statement is to identify operators who price independently (ideally large operators) and operators who are followers. Identifying market leaders through a data-based approach helps the business serve these players better. As resources are scarce, the question is: on which operators should the company focus its resources? This analysis can also feed internal strategic decision-making and can positively impact long-term relationships between operators and online ticketing platforms.

Problem Statement – Identify price-setters and followers. 

2. Explore Dataset:

Data can be downloaded from here.

```python
display(bus_fare_df.head())
display(bus_fare_df.shape)
display(bus_fare_df.describe())
display(bus_fare_df.dtypes)
```

Screenshot: Author

The dataset has 5 columns:

Seat Fare Type 1 – Type 1 seat fare.

Seat type 1 has a different set of prices.

We need to clean it and identify a single price, so that the analysis is simpler and less complicated.

Seat Fare Type 2 – Type 2 seat fare.

Similar to Seat Fare Type 1.

Bus – Bus unique identifier.

Service Date – Journey date.

Convert to a pandas timestamp and extract day of the week, month, year, etc.

RecordedAt – Pricing recorded date, i.e., when the price was captured by the system.

Similar to service date.
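The date handling described above can be sketched as follows; the toy rows and the day-first date format are assumptions based on the dataset description:

```python
import pandas as pd

# Toy rows in the same shape as the dataset's date columns
bus_fare_df = pd.DataFrame({
    "Service_Date": ["15-07-2023 00:00", "16-07-2023 00:00"],
    "RecordedAt": ["01-07-2023 10:30", "02-07-2023 11:45"],
})

# Parse both date columns (day-first format) into pandas timestamps
for col in ["Service_Date", "RecordedAt"]:
    bus_fare_df[col + "_new"] = pd.to_datetime(bus_fare_df[col], dayfirst=True)

# Derive calendar features used later in the analysis
bus_fare_df["service_dow"] = bus_fare_df["Service_Date_new"].dt.day_name()
bus_fare_df["service_month"] = bus_fare_df["Service_Date_new"].dt.month
```

The same pattern extends to year, week-of-year or any other calendar feature via the `.dt` accessor.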

Modern ticketing platforms enrich the customer experience through a robust UI. On the left-hand side are filters such as bus types and amenities, the top has information about bus timings and prices, and the bottom space provides seat-selection capability.

The terms operators and buses are used interchangeably.

Platform refers to online ticketing platforms such as Redbus, Makemytrip.

3. Data Preprocessing: 

Functions to clean seat fares and compute the average price:

```python
def clean_seat(x):
    '''Input is a comma-separated price string, not a list.'''
    a = [float(price) for price in x.split(",")]
    return sum(a) / len(a)


def average_s1_s2_price(s1, s2):
    '''Scalar prices for seat type 1 and seat type 2; all 4 zero/non-zero combinations covered.'''
    if (s1 == 0) & (s2 == 0):
        return 0
    elif (s1 == 0) & (s2 != 0):
        return s2
    elif (s1 != 0) & (s2 == 0):
        return s1
    else:
        return (s1 + s2) / 2
```

Calculate the average fare per bus; having one data point is easier than multiple.

Backfill prices so that missing values can be replaced.

Convert seat fare type 1 and seat fare type 2 to string and replace nulls.

```python
bus_fare_df = bus_fare_df.sort_values(by=["Bus", "Service_Date", "RecordedAt"])

# Forward-fill zero prices within each Bus x Service_Date group (seat type 1)
test = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "Seat_Fare_Type_1_average"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test = test[["Bus", "Service_Date", "Seat_Fare_Type_1_average"]]
test["Seat_Fare_Type_1_average_impute"] = test.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='ffill'))
display(test.shape)
display(bus_fare_df.shape)

# Same forward-fill for seat type 2
test2 = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "Seat_Fare_Type_2_average"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test2 = test2[["Bus", "Service_Date", "Seat_Fare_Type_2_average"]]
test2["Seat_Fare_Type_2_average_impute"] = test2.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='ffill'))
display(test2.shape)

bus_fare_df["Seat_Fare_Type_1_average_impute_ffil"] = test["Seat_Fare_Type_1_average_impute"]
bus_fare_df["Seat_Fare_Type_2_average_impute_ffil"] = test2["Seat_Fare_Type_2_average_impute"]

# Backfill variants of the same imputation
test = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "Seat_Fare_Type_1_average"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test = test[["Bus", "Service_Date", "Seat_Fare_Type_1_average"]]
test["Seat_Fare_Type_1_average_impute_bfil"] = test.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='bfill'))
display(test.shape)

test2 = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "Seat_Fare_Type_2_average"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test2 = test2[["Bus", "Service_Date", "Seat_Fare_Type_2_average"]]
test2["Seat_Fare_Type_2_average_impute_bfil"] = test2.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='bfill'))
display(test2.shape)

bus_fare_df["Seat_Fare_Type_1_average_impute_bfill"] = test["Seat_Fare_Type_1_average_impute_bfil"]
bus_fare_df["Seat_Fare_Type_2_average_impute_bfill"] = test2["Seat_Fare_Type_2_average_impute_bfil"]

# Forward- and backfill the combined average price
test_a = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "average_price_s1_s2"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test_a = test_a[["Bus", "Service_Date", "average_price_s1_s2"]]
test_a["average_price_s1_s2_bfil"] = test_a.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='bfill'))
display(test_a.shape)

test_f = bus_fare_df[["Bus", "Service_Date", "RecordedAt", "average_price_s1_s2"]].sort_values(
    by=["Bus", "Service_Date", "RecordedAt"])
test_f = test_f[["Bus", "Service_Date", "average_price_s1_s2"]]
test_f["average_price_s1_s2_ffil"] = test_f.groupby(["Bus", "Service_Date"]).transform(
    lambda x: x.replace(to_replace=0, method='ffill'))
display(test_f.shape)

bus_fare_df["average_price_s1_s2_bfill"] = test_a["average_price_s1_s2_bfil"]
bus_fare_df["average_price_s1_s2_ffill"] = test_f["average_price_s1_s2_ffil"]

# Combine the ffilled and bfilled averages into the final price column
bus_fare_df['average_price_s1_s2_filled'] = bus_fare_df.apply(
    lambda x: average_s1_s2_price(x.average_price_s1_s2_ffill, x.average_price_s1_s2_bfill), axis=1)
```

Create flags for buses whose only price point is Rs 0; these could be cancelled buses.

average_price_s1_s2_filled is the final average price point.

The level of detail (LOD) of the data is Bus x Service_Date x RecordedAt.

The preprocessing step creates all the cleaned columns and features.

The code can be downloaded from here as well.

Screenshot: Author

4. Exploratory Data Analysis

EDA is all about asking as many relevant questions as possible and getting answers through data. EDA on its own might not solve the business problem, but it provides valuable explanations as to why some things are the way they are. It also helps identify important features in a dataset and filter out discrepancies.

As only pricing data is available, certain assumptions help narrow down the analysis:

All operators are charged a similar commission.

All operators have similar ratings/amenities.

All operators run on the same routes, covering similar distances.

All operators have similar boarding and destination points.

All operators have a similar couponing/discounting policy.

Null hypothesis – All operators price independently and there are no price-setters or followers.

Alternate hypothesis – Not all operators price independently and there are price setters and followers.

EDA problem statements and probable business intelligence gained by answering them :

Percentage change between the initial price and final price?

Do operators increase or decrease prices over time? This information can help bus operators identify opportunities to keep prices constant instead of reducing them.

It also provides initial impetus and validates our problem statement.

Days between price listing and journey date:

Identify the perfect listing window. Is 10 days before the journey date too short, or is 30 days too long? Does the early bird always get the worm? An optimized window saves cost: if all seats are not filled within the window, it is a loss; if all seats are filled before time, an opportunity is lost, which turns out to be an opportunity cost.

Daily average prices across platform vs bus operators:

Identify market price expectations, and improve pricing decisions.

Price bucketing of buses:

Descriptive statistics.

Identify competition and deploy tactics to mitigate losses due to competitive pricing.

Day of the week analysis on price:

Weekend vs weekday analysis, to improve pricing decisions.

Price elasticity across the platform:

Based on prices, identify the different seats available.

Try to gauge price sensitivity on the type of seating.

Percentage change between the initial price and final price?

For 60% of Bus x Service_Date combinations, the change between the initial and final price is 0.

The remaining 40% of buses have mostly seen a decrease in final prices.

The box-and-whisker plot shows, on average, a 10% decrease in the final price.

Operators list higher prices and eventually reduce them over time.

A MOM (month-on-month) boxplot of the same would shed some light on how seasonality affects price changes.
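A minimal sketch of such a month-on-month view, using toy numbers rather than the actual dataset:

```python
import pandas as pd

# Toy per-journey percentage price changes tagged with the service month
mom_df = pd.DataFrame({
    "service_month": [6, 6, 7, 7, 8, 8],
    "price_diff_i_f_perc": [-0.05, -0.12, 0.00, -0.10, -0.08, 0.02],
})

# Per-month distribution of the price change;
# mom_df.boxplot(column="price_diff_i_f_perc", by="service_month")
# would render the month-on-month boxplot of the same data
monthly_median = mom_df.groupby("service_month")["price_diff_i_f_perc"].median()
```

A shift of the monthly medians away from 0 in specific months would be the seasonality signal this view is after.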

Screenshot: Author


```python
# Join the first and last recorded prices per Bus x Service_Date
one_df_002 = pd.merge(one_df,
                      final_cleaned_df[["Bus", "Service_Date", "RecordedAt", "average_price_s1_s2_filled"]],
                      how="left",
                      left_on=["Bus", "Service_Date", "min"],
                      right_on=["Bus", "Service_Date", "RecordedAt"],
                      suffixes=('_left', '_right'))
one_df_003 = pd.merge(one_df_002,
                      final_cleaned_df[["Bus", "Service_Date", "RecordedAt", "average_price_s1_s2_filled"]],
                      how="left",
                      left_on=["Bus", "Service_Date", "max"],
                      right_on=["Bus", "Service_Date", "RecordedAt"],
                      suffixes=('_left', '_right'))

# Absolute and percentage difference between final and initial price
one_df_003["price_diff_i_f"] = one_df_003.average_price_s1_s2_filled_right - one_df_003.average_price_s1_s2_filled_left
one_df_003["price_diff_i_f_perc"] = one_df_003.price_diff_i_f / one_df_003.average_price_s1_s2_filled_left

one_df_004 = one_df_003[["Bus", "Service_Date", "price_diff_i_f"]].drop_duplicates()
one_df_005 = one_df_003[["Bus", "Service_Date", "price_diff_i_f_perc"]].drop_duplicates()
one_df_005.boxplot(column=["price_diff_i_f_perc"])
one_df_004.price_diff_i_f.hist(bins=50)
```

Days between price listing and journey date

75% of buses list on or before 30 days ahead of the journey date.

The remaining 25% list between 30 and 60 days ahead.

It would be interesting to see if buses listed for longer periods operate at higher volumes as compared to average.

Screenshot: Author


```python
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

# Keep only the first recorded price per Bus x Service_Date
groups = final_cleaned_df.groupby(["Bus", "Service_Date_new"]).RecordedAt_new
min_val = groups.transform(min)
one_df = final_cleaned_df[final_cleaned_df.RecordedAt_new == min_val]
one_df["date_diff"] = one_df.Service_Date_new - one_df.RecordedAt_new

figure(figsize=(10, 6), dpi=80)
plt.subplot(1, 2, 1)
plt.title("Date Difference Boxplot")
plt.boxplot(one_df.date_diff.astype('timedelta64[D]'))
plt.subplot(1, 2, 2)
plt.title("Date Difference Histogram")
plt.hist(one_df.date_diff.astype('timedelta64[D]'))
```

Daily average prices across platform vs bus operators

The flat orange price line until 2023-07-15 is backfilled, hence it appears constant.

This provides a view of the platform average prices vs a single bus operator.

Near the journey period, the peak shifts to the right.

Screenshot: Author 


```python
import numpy as np

# Platform-wide daily average price (zeros excluded)
plot_1 = bus_fare_df[bus_fare_df["average_price_s1_s2_filled"] != 0].groupby(
    ["RecordedAt_date_only"]).agg({"average_price_s1_s2_filled": np.mean})
figure(figsize=(10, 6), dpi=80)
plt.plot(plot_1.index, plot_1.average_price_s1_s2_filled, label="Platform")

# Daily average price for a single bus operator
plot_2 = bus_fare_df[(bus_fare_df["average_price_s1_s2_filled"] != 0) &
                     (bus_fare_df["Bus"] == "060c6d5595f3d7cf53838b0b8c66673d")].groupby(
    ["RecordedAt_date_only"]).agg({"average_price_s1_s2_filled": np.mean})
plt.plot(plot_2.index, plot_2.average_price_s1_s2_filled, label="060c6d5595f3d7cf53838b0b8c66673d")
```

Price bucketing of buses

Using price buckets, the different types of seats available can be identified.

For operators, it helps understand the competition and provides more information on how to position themselves in the market.

Understanding the market helps set internal pricing business rules and guardrails.

The lowest price is 350, the highest is 1449.

The average is 711 and the 50th percentile is 710; the histogram peaks at about 700-750, right where the average lies.

A lot of 0s are present in the histogram due to backfilling. In the boxplot, these 0s are removed to give a clearer picture of prices.

Bucketing can be based on bins or percentiles.

Preliminary bucketing can be done based on the boxplot:

Bucket 1 – 350-610  – Regular Seaters

Bucket 2 – 610-710  – Sleeper Upper Double / Seater

Bucket 3 – 710-800 –  Sleeper Upper Single / Sleeper lower Double

Bucket 4 – 800-1150 – Sleeper Lower / AC Volvo seater

Bucket 5 – 1150-1449 – AC Sleeper
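The preliminary buckets above can be applied with `pd.cut`; the bin edges follow the boxplot-based buckets, and the prices below are illustrative toy values:

```python
import pandas as pd

# Toy prices spanning the observed 350-1449 range
prices = pd.Series([350, 500, 650, 725, 900, 1200, 1449])

# Bin edges taken from the preliminary buckets; labels are the assumed seat types
edges = [350, 610, 710, 800, 1150, 1449]
labels = [
    "Regular Seater",
    "Sleeper Upper Double / Seater",
    "Sleeper Upper Single / Sleeper Lower Double",
    "Sleeper Lower / AC Volvo Seater",
    "AC Sleeper",
]
buckets = pd.cut(prices, bins=edges, labels=labels, include_lowest=True)
```

`include_lowest=True` keeps the minimum price (350) inside the first bucket; percentile-based bucketing would swap the fixed edges for `pd.qcut`.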

Screenshot: Author


```python
figure(figsize=(10, 6), dpi=80)
price_f = final_cleaned_df[final_cleaned_df["average_price_s1_s2_filled"] != 0]

plt.subplot(1, 2, 1)
plt.title("Boxplot - Average Price")
plt.boxplot(price_f.average_price_s1_s2_filled)

plt.subplot(1, 2, 2)
plt.title("Histogram - Average Price")
plt.hist(final_cleaned_df.average_price_s1_s2_filled, bins=100)
```

Price elasticity across the platform

The platform's average price changes drastically from time to time, which suggests demand varies as well. But due to the unavailability of demand information, the price elasticity of demand cannot be calculated.

This plot provides a proxy for the change in the platform's GMV (gross merchandise value).



Readers are encouraged to think about other hypotheses for or against the problem statement, which in turn will help in finding feasible solutions.

5. Exploring Feasible Solutions

The problem statement asks us to identify independent price-setters and followers; the assumption is that a follower follows just one operator. We could be tempted to think that the follower checks multiple operators before setting the final price, just as users compare prices on Redbus, Makemytrip and EaseMyTrip before purchasing tickets. This temptation is the conjunction fallacy! The probability of following one operator is higher than the probability of following two operators. Hence it is safe to compare operators' pricing data 1:1 and assign setter and follower status; this is both heuristically and statistically viable, and it is the approach followed in this solution.

How has the EDA helped with solutions?

An average 10% change between the initial and final price is strong evidence that operators leverage pricing to increase volumes.

Prices form a long-tailed, wide bell curve, hence the need to compare within similar buckets or similar price groups.

The majority of individual bus prices move along with market prices. If this did not hold, it would be evidence of a data anomaly or coding error.

Compare operators 1:1 to identify follower and setter, rather than 1:2 or more, to avoid the conjunction fallacy.

Follower Detection Algorithm V001:

Randomly select 2 bus IDs, B1 and B2, with prices P1 and P2, having sufficient data points.

Join the two datasets and keep relevant columns only.

The basic hypothesis is that the price-change ratio (p') will converge at some timestamp, meaning the prices have intersected and there is setter-follower behaviour.

Calculate the difference in price (~P) with respect to P1 as well as P2: (P2-P1)/P1 and (P2-P1)/P2, assuming B2 is the follower.

Get the average price change by calculating the harmonic mean of ~P (different denominators P1 and P2, hence the harmonic mean).

HM_Score = (max(hm) - min(hm)) / min(hm). If the score is low, there is less confidence in a setter-follower relationship.
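A toy sketch of the ~P and HM_Score computation above; the price series are made up, and the closed form 2ab/(a+b) is the harmonic mean of two values, equivalent to `scipy.stats.hmean` over the pair:

```python
import numpy as np

# Toy aligned price series for two buses at the same recorded timestamps
p1 = np.array([500.0, 520.0, 540.0, 560.0])
p2 = np.array([510.0, 515.0, 545.0, 600.0])

# Price gap relative to each bus's own price: |P2-P1|/P1 and |P2-P1|/P2
cng_b1 = np.abs(p2 - p1) / p1
cng_b2 = np.abs(p2 - p1) / p2

# Harmonic mean of each pair of ratios (2ab/(a+b) for two values)
hm = 2 * cng_b1 * cng_b2 / (cng_b1 + cng_b2)

# Spread of the harmonic-mean series relative to its minimum
hm_score = (hm.max() - hm.min()) / hm.min()
```

Here the last timestamp's large gap (560 vs 600) pushes the score up, illustrating how a late divergence or convergence dominates HM_Score.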

The second aspect of the solution is to identify the coefficient (beta) of the equation ~P2 = C + beta*~P1. Ideally, the scatter plot will lie at a slope of 45 degrees or at least show a positive correlation. A p-value of 0.8 to 1 is considered (experimentally) good confidence. The intercept needs to be close to 0, meaning little or no difference in price, which could indicate convergence.

A combined score/metrics of p-value and hm_score can help identify a follower and a setter.

This is an initial, crude solution and can be further refined to improve its accuracy.

```python
import numpy as np
import pandas as pd
import scipy.stats
from sklearn import linear_model

f = final_cleaned_df.copy()
b1 = f[(f["Bus"] == "a6951a59b64579edcf822ab9ea4c0c83") & (f["Service_Date"] == "15-07-2023 00:00")]
b2 = f[(f["Bus"] == "ab479dab4a9e6bc3eaefe77a09f027ed") & (f["Service_Date"] == "15-07-2023 00:00")]

# Union of recorded timestamps for both buses
recorded_dates_df = (pd.concat([b1[["RecordedAt_new"]], b2[["RecordedAt_new"]]], axis=0)
                     .drop_duplicates()
                     .sort_values(by="RecordedAt_new")
                     .reset_index()
                     .drop(columns="index"))
joined_1 = pd.merge(recorded_dates_df, b1, on=["RecordedAt_new"], how='left', suffixes=('_actuals', '_B1'))
joined_df = pd.merge(joined_1, b2, on=["RecordedAt_new"], how='left', suffixes=('_B1', '_B2'))

cols_to_keep = ["RecordedAt_new", "Service_Date_B1", "Bus_B1", "Bus_B2",
                "average_price_s1_s2_filled_B1", "average_price_s1_s2_filled_B2"]
model_df = joined_df[cols_to_keep]
model_df_2 = model_df.drop_duplicates()

# Replace nulls in the service date and bus ids with the most frequent value
model_df_2['Service_Date_B1'] = model_df_2['Service_Date_B1'].fillna(model_df_2['Service_Date_B1'].value_counts().idxmax())
model_df_2['Bus_B1'] = model_df_2['Bus_B1'].fillna(model_df_2['Bus_B1'].value_counts().idxmax())
model_df_2['Bus_B2'] = model_df_2['Bus_B2'].fillna(model_df_2['Bus_B2'].value_counts().idxmax())
model_df_2.fillna(0, inplace=True)

# Backfill zero prices for both buses
test_a = model_df_2.sort_values(by=["RecordedAt_new"])
test_a = test_a[["Service_Date_B1", "average_price_s1_s2_filled_B1"]]
test_a["average_price_B1_new"] = test_a.groupby(["Service_Date_B1"]).transform(
    lambda x: x.replace(to_replace=0, method='bfill'))
test_f = model_df_2.sort_values(by=["RecordedAt_new"])
test_f = test_f[["Service_Date_B1", "average_price_s1_s2_filled_B2"]]
test_f["average_price_B2_new"] = test_f.groupby(["Service_Date_B1"]).transform(
    lambda x: x.replace(to_replace=0, method='bfill'))
model_df_2["average_price_B1_new"] = test_a["average_price_B1_new"]
model_df_2["average_price_B2_new"] = test_f["average_price_B2_new"]

# Price change with respect to each bus's own price
model_df_2["price_cng_b1"] = abs(model_df_2.average_price_B1_new - model_df_2.average_price_B2_new) / model_df_2.average_price_B1_new
model_df_2["price_cng_b2"] = abs(model_df_2.average_price_B1_new - model_df_2.average_price_B2_new) / model_df_2.average_price_B2_new
model_df_2 = model_df_2[model_df_2["average_price_B1_new"] != 0]
model_df_2 = model_df_2[model_df_2["average_price_B2_new"] != 0]

# Harmonic mean of the two ratios and the HM score
hm = scipy.stats.hmean(model_df_2[["price_cng_b1", "price_cng_b2"]], axis=1)
display((max(hm) - min(hm)) / min(hm))

model_df_3 = model_df_2[["price_cng_b1", "price_cng_b2"]]
model_df_3.plot()

# Linear regression of ~P2 on ~P1
regr = linear_model.LinearRegression()
regr.fit(np.array(model_df_2["price_cng_b1"]).reshape(-1, 1),
         np.array(model_df_2["price_cng_b2"]).reshape(-1, 1))
print("Coefficients:\n", regr.coef_)
```

6. Test For Accuracy and Evaluation

Manual evaluation with B1 – 9656d6395d1f3a4497d0871a03366078, B2 – 642c372f4c10a6c0039912b557aa8a22 and service date – 15-07-2023 00:00.

Actual price data over a 14-day period. We see that B1 is a follower, but with low confidence.


The harmonic mean score ((max(hm) - min(hm)) / min(hm)) is high, at about 8, meaning there is a significant difference in prices at some point in time.


The coefficient shows that (P2-P1)/P1 and (P2-P1)/P2 are linearly related; an R2 above 75% would also indicate a healthy model. The intercept being close to 0 also supports our assumption.


Final confidence can be hm_score * coefficient = 8 * 0.8 = 6.4. This is absolute confidence. Relative confidence, obtained by normalizing across all combinations, gives a score between 0 and 1, which is easier to gauge.
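The relative-confidence step could be a min-max normalization across all operator pairs; the absolute scores below are hypothetical:

```python
import numpy as np

# Hypothetical absolute confidence scores (hm_score * coefficient), one per operator pair
abs_conf = np.array([6.4, 2.1, 0.5, 9.8, 4.0])

# Min-max rescale to [0, 1]; 1 marks the strongest setter-follower signal in the sample
rel_conf = (abs_conf - abs_conf.min()) / (abs_conf.max() - abs_conf.min())
```

A pair's relative score now reads directly as "how strong is this signal compared to every other pair examined".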

We can reject the null hypothesis and assume the alternate to be true.

While this isn't an optimal solution, it's a good reference point to begin with. Smoothing the rough edges of the algorithm and refining it for accuracy will lead to better results.

Useful Resources and References

Data science can be broadly divided into business-solution-centric and technology-centric divisions. The resources below will immensely help a business-solution-centric data science enthusiast expand their knowledge.

Fixing cancellations in the cab market.

Don’t fall for the conjunction fallacy!

HBR – Managing Price, Gaining Profit

Pricing: Distributors’ most powerful value-creation lever

Ticket Sales Prediction and Dynamic Pricing Strategies in Public Transport

Pricing Strategies for Public Transport

The Pricing Is Right: Lessons from Top-Performing Consumer Companies

The untapped potential of a pricing strategy and how to get started

Kaggle notebook with the solution can be found on my Kaggle account.

End Notes

This article presents a preliminary, unitary method to identify fare setters and followers. The accuracy of preliminary methods tends to be questionable, but it sets a precedent for more resilient and curated methods to come. The methodology can be improved with more data points and features such as destination, boarding location, YOY comparisons, fleet size, passenger information, etc. Moreover, anomaly detection modelling might provide more satisfactory results as well.

The dataset can also be used for price forecasting using ARIMA or deep learning methods such as LSTM etc.

Good luck! Here’s my Linkedin profile in case you want to connect with me or want to help improve the article. Feel free to ping me on Topmate/Mentro as well, you can drop me a message with your query. I’ll be happy to be connected. Check out my other articles on data science and analytics here.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.


You're reading Dynamic Bus Fare Pricing Comparison And Detection

Difference Between Static And Dynamic Testing

As we know that testing is the most important stage in the process of delivery of any application or software. Testing not only validates the quality of an application but also provides an opportunity to the developer to improve its product.

Every application is being developed in some high or low level language which means some code has been written for its development so on the basis of execution of code written for the application there is classification of testing namely static testing and dynamic testing.

In this article, we will discuss all the important differences between static testing and dynamic testing. Let’s start with some basics of static testing and dynamic testing.

What is Static Testing?

Static testing is the testing in which code written for application is not executed during testing phase and only review of code is performed and basis on which defects and code quality has been examined. As its name implies, static testing performs the static verification of the code. It targets the assessment of program code and documentation.

Static testing is generally performed before the compilation of the code. Static testing involves two types of testing techniques namely review and static analysis.

What is Dynamic Testing?

Dynamic testing there is execution of code written for the application and then defects and application behavior has been examined. Dynamic testing is performed to examine the behavior of the software based on dynamic inputs. The dynamic testing of a software product is performed after compilation of the software code.

Dynamic testing is classified into two types namely – white box testing and black testing. In software testing techniques, the dynamic testing is one of the essential tools for detecting any security threats. The dynamic testing increases quality of the product.

Difference between Static and Dynamic Testing

The following table highlights the major differences between Static Testing and Dynamic Testing −

Parameter Static Testing Dynamic Testing

Definition Static testing is the testing in which code written for application is not executed during testing phase and only review of code is performed and basis on which defects and code quality has been examined. Dynamic testing there is execution of code written for the application and then defects and application behavior has been examined.

Nature of testing As name states static testing does the static verification process in which the requirement and corresponding written code has been verified. Dynamic testing does the validation process which examines the expected behavior of the application based on dynamic inputs provided to the application.

Testing target Static testing targets to the assessment of code and documentation. Dynamic testing targets the runtime bugs/bottlenecks in the software system.

Prerequisite For static testing, a checklist of application process and documentation is required. For dynamic testing, test cases for execution are to be developed.

Stage of testing Static testing generally get performed before compilation of code Dynamic testing mostly performed after compilation of code.

Cost to Company In Static testing, the cost of finding the defects and fixing them is less. Also, the Return on Investment is high because static testing is carried out at an early stage of development. In case of Dynamic testing, the cost of finding and fixing the defects is high. Also the Return on Investment (RoI) is low because this process is carried out after the development phase.


The most important difference between static and dynamic testing is that the static testing checks the defects in software without actual execution of the software code and it analyzes the static behavior of the software, while dynamic testing is used to analyze the dynamic behavior of the software.

Zoho Crm Review And Pricing In 2023


Those looking for a complete customer experience solution could benefit from Zoho CRM Plus, which combines marketing, CRM, help desk and customer service tools in one application. When billed annually, the pricing ranges from $14 to $52 per user per month.


Zoho’s customer support team promises a response time as short as one hour, depending on the service plan.

Zoho CRM Features

How do you choose the right CRM, and what should you look for in such a system? The Zoho CRM has appealing features for businesses prioritizing mobile usage in addition to typical customer relationship management tools.

Mobile App

Some CRM mobile apps offer only basic features, like record and note access. In contrast, Zoho CRM’s smartphone apps include important functions such as analytics dashboards, team collaboration tools, map integration, voice notes and call logging to extend the CRM’s usability to remote and mobile workers. Zoho’s AI-powered assistant, Zia, helps sales professionals complete routine or recurring tasks such as updating records, adding notes and managing the sales pipeline. Zia responds to simple text-based or natural-language commands. The phrases you use can be configured as well. 


If you want to go beyond navigating to your next meeting and build sales routes with optimized navigation and visualized maps, connect Zoho CRM to Zoho RouteIQ for an extra fee.


If you want to add a speck of gamification to the healthy competition between your sales reps, you can use Zoho’s Motivator to visualize key performance indicators (KPIs) and even offer trophies to top performers.


Our review of Zoho Books and our Zoho Assist review explain what else this provider has to offer small businesses.

Zoho’s AI flags potential anomalies in customer communication.

Source: chúng tôi and Onboarding

Zoho CRM’s extensive documentation and user guides make it a breeze to set up your new system. There are only a few deployment steps required before the CRM is ready to use:

Enter the time zone, language and basic company information.

Configure the security settings.

Set user roles and permissions.

Customize the modules, record details and page layout.

Automate workflows.

Import existing data.

Most of these steps can be completed in just a few days, but customization options may require additional tweaking as needs become more clearly defined.

Zoho CRM supports dozens of add-ons and integrations, including options for highly rated business phone systems, productivity tools and external cloud storage. This functionality is included with all paid service plans, but some integrations require paid licenses. Fortunately, they can be set up anytime after the CRM is deployed, giving you time to assess your company’s overall needs.

You can facilitate Zoho CRM’s implementation with blueprints.

Source: chúng tôi Service

Zoho CRM offers four support levels: Basic, Classic, Premium and Enterprise. While Basic and Classic are available for free, the others come at an additional cost of 20% and 25% of the license fee, respectively, but also include perks like remote assistance, product onboarding and reduced response time.   

In our customer service testing with the company, we initially found it difficult to get a clear understanding of the features included in each plan. When we reached out a second time, we had a better experience. The representative was very patient and knowledgeable and even set up a time to discuss the platform further.       

One support area where Zoho shines is digital documentation. You can find a wealth of instructions and documentation online, as well as detailed tutorials, training videos, and live and on-demand webinars.


Upper-level service plans support 28 languages and multiple global currencies for businesses with international workers and customers.

Jetpack AI Assistant: Pricing, Features and Use Cases

Create compelling and professional content within WordPress with this powerful AI assistant.

About Jetpack AI Assistant

JetPack AI Assistant is an AI tool that creates engaging content within the WordPress Editor. It allows users to write blog posts, edit content, and adjust the tonality of the posts using AI. The tool can also suggest titles, generate summaries, and translate text into various languages.

JetPack AI Assistant pairs an intuitive interface with powerful AI capabilities to help users produce high-quality content faster. It can generate various types of content, including tables, blog posts, structured lists, and detailed pages. The tool is integrated into WordPress, so you can start using it immediately after creating your free account.

Jetpack AI Assistant Features

Jetpack AI Assistant offers several impressive features for WordPress users. Some of the best functionalities of this tool include the following:

It can easily be integrated into the WordPress editor.

It has an intuitive, beginner-friendly interface.

It generates content on a diverse range of topics.

JetPack AI Assistant adjusts the tone to match the style and context of the blog post.

It detects and corrects spelling and grammatical errors.

Users can request the tool to generate a title or summary for a blog post.

It translates content into multiple languages.

It creates content faster, saving the time of writers and website owners.

Jetpack AI Assistant Use Case – Real-World Applications

JetPack AI Assistant can be used for various purposes. Some of its applications include the following:

Content creators can use it to write blog posts, articles, or website content.

Editors can use it to spot errors in the content and edit them.

Businesses can use it to ensure their content is of high-quality.

It can be used to produce content in various languages.

Jetpack AI Assistant Pricing

JetPack AI Assistant has a free plan and a paid plan. Prices vary depending on the features and the number of requests each plan can handle. Below is an overview of both plans:

Free – $0 per month – Handles up to 20 requests; creates tables, blog posts, and lists; adjusts tone; and fixes grammatical issues.

Paid – $12.54 per month – Includes everything in the free plan, plus high-volume request access and priority support.


Does the JetPack AI Assistant Premium Plan have a request limit?

No, the premium plan doesn’t impose any limit on the number of requests sent or processed by the platform. It supports an unlimited number of requests with priority access to the support team. However, the company says that it will impose an upper limit on the number of requests in the coming months. Keep checking their announcement page for the latest information.

Can the JetPack AI Assistant adjust the tone?

Yes, the JetPack AI Assistant allows users to modify the tone of their content. You can choose between a formal or conversational tone, and the tool will edit your content accordingly.

Is the JetPack AI Assistant available for free?

Yes, the JetPack AI Assistant is available for free. However, it only supports 20 requests and offers limited features. To enjoy all the premium features and get priority access to the support team, you need to switch to the premium plan.

Is the JetPack AI Assistant available within WordPress?

Yes, you can access the JetPack AI Assistant within your WordPress editor. It is integrated within WordPress and doesn’t require you to download any software or tool separately. You have to install the JetPack AI Assistant Plugin, and you will get all its features right within the WordPress editor.

Can I use JetPack AI Assistant to write blog posts for publishing online?

You can use the JetPack AI Assistant to write blog posts for your online blog. It can generate blogs on diverse topics and publish them online. It generates unique, plagiarism-free content that can be used for personal or commercial purposes.

JetPack AI Assistant is a powerful companion for writers and editors. It can rapidly write and edit various types of content within the WordPress editor. The tool is ideal for freelancers, editors, and businesses that want to save time while producing high-quality content.


Copy AI Review: Pricing, Features and Alternatives 2023

Copy ai is a copywriting tool and content writing assistant for creating high-converting copy.

About Copy ai

Copy ai can automatically produce highly targeted sales copy focused on the needs and pain points of diverse customer segments.

Launch Date – Oct 2023

Founder – Paul Yacoubian

Copy ai Features

Copy AI has over 90 templates and tools to help you write copy up to 10X faster. Here are the standout features of Copy ai:

Copy ai writes SEO-optimized blog posts in a short period.

Copy ai can produce long-form sales copy to convert your potential audience into sales.

Copy ai can generate eye-catching digital ad copies for marketing campaigns.

Copy ai can produce engaging social media content.

Copy ai generates high-quality product descriptions and e-commerce copies for websites.

Copy ai lets you directly paste the final output to your publishing platform.

Copy ai offers over 30+ free AI-based writing generators to level up your marketing efforts.

Copy ai lets you select pre-designed templates from the following categories: business, HR, marketing, real estate, personal, and sales.

Copy ai has its own AI chatbot, “Chat by Copy.ai” (an alternative to ChatGPT), that delivers up-to-date responses by extracting real-time data.

Copy ai chatbot lets you sum up Linkedin profiles into crisp bullet points.

Copy ai can create copies in over 29 languages making it an accessible tool worldwide.

Copy ai can generate SEO-friendly content to increase the ranks.

Copy ai Use Case – Real-World Applications

Copy ai has emerged as a game changer for Small-to large businesses, email marketers, bloggers, social media creators, and teams. Here are some real-world applications of copy ai.

Businesses utilize copy ai to create personalized sales copy, long-form posts & product descriptions faster for their sales campaigns.

Copy ai assists in personalized cold outreach over emails and LinkedIn.

Copywriters can utilize copy ai to write compelling and high-converting emails for email marketing.

Bloggers use copy ai to produce high-quality blogs and articles by simply entering titles and keywords.

Social media managers can write posts in bulk for over a month.

Copy ai chatbot assists market researchers by offering prebuilt prompts.

Youtubers can use copy ai’s chat feature to extract data for their Youtube videos.

Copy ai chatbot assists in lead generation on LinkedIn.

Copy ai Pricing

Copy ai has a free plan that lets a single user write up to 2,000 words per month. Here are the two premium plans of Copy ai.

1. Pro Plan ($36/month)

Five users can use it simultaneously.

Access to Chat by Copy.ai.

No word limit.

Unlimited projects.

29+ languages.

Access to new features.

Ideal for all copywriters.

2. Enterprise Plan

Customized plan 

Ideal for a team of over 5 users.

Chat interface.

Options to automate workflows.

SOC 2 security feature included.


Is copy AI free?

Copy ai lets a single user benefit from its features through the free plan; to access it, simply sign up. The free plan allows you to write only 2,000 words per month, with limited access to new features.

Is copy ai better than ChatGPT?

Unlike ChatGPT, Copy ai's chat is trained on the latest real-time data. You can enhance your text quality with Copy ai's prebuilt prompts, which ChatGPT lacks. Copy ai can also collect and summarize website information and data, which ChatGPT cannot, and it can search the latest LinkedIn posts.

How to make money with Copy ai?

By learning copy ai, you can open the door to endless earning opportunities by becoming a blogger, freelance writer, or copywriter. You can write blogs, emails, social media copies, e-commerce product descriptions, etc., to earn 5 to 6 figures. You can use copy ai in affiliate marketing for writing product pages. 

Is copy AI better than Jasper?

Copy ai has an edge over Jasper ai primarily in usage limits and pricing; it can cut your monthly budget by 83-92%. Copy ai's Pro plan generates unlimited words, while Jasper ai caps output at 700,000. Copy ai also offers 40+ more templates and tools than Jasper ai.

Will copy AI replace copywriters?

Copy ai is designed to help copywriters work up to 10X faster. Copywriters know how to bring empathy and emotion to their work, which AI tools like Copy ai cannot do as effectively, and final editing still requires human input. It is therefore unlikely that Copy ai will replace copywriters.


Copy ai is the preferred AI writing tool for over 7,000,000 teams and professionals worldwide. It is constantly updated with new features, including more content types, and is a budget-friendly, must-have content generation tool for businesses. You can start with the free plan if you are not yet ready to invest in the Pro plan.


AWS Aurora: Architecture, Pricing, MySQL, and PostgreSQL Compatibility

Companies can manage their data effectively and enhance the customer experience with the help of Amazon Web Services. AWS Aurora manages the data in its database using a clustered volume design and replicates it for disaster recovery. Amazon Aurora is compatible with MySQL and PostgreSQL, both open-source databases, and improves on them in key areas including durability, security, portability, and cost. It is faster and less maintenance-intensive than stock MySQL and PostgreSQL.

What is AWS Aurora?

AWS Aurora Architecture

The conventional DBMS serves as the foundation for Aurora Database. The majority of the standard DBMS’s parts, including the Query Execution Engine, Transaction Manager, and Recovery Manager, are reused. However, it adds several adjustments to the conventional DBMS to enhance its scalability, availability, and reliability.

Aurora stores data remotely rather than on the local disk, and its Disk Manager is enhanced to work with remote storage. To increase reliability, Aurora replicates the data, typically six times across three data centers; with this many replicas, it is extremely unlikely that user data will be lost. Each copy of the data is managed by a single virtual server (an Amazon EC2 instance) and kept on that instance's local disk. In this example, Aurora manages the replicated data using six EC2 instances spread across three data centers.

Aurora goes one step further to improve the system's efficiency: only the changelog is sent to remote storage. In our write example, Aurora ships the changelog to the six EC2 instances. When an EC2 instance receives a request to persist a changelog record, it first saves the record to the changelog on disk and then applies it to the data pages. This significantly reduces network bandwidth use.
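The persist-then-apply flow described above can be illustrated with a toy sketch. All names here (`LogRecord`, `StorageNode`, and so on) are illustrative inventions, not actual Aurora internals: the writer ships only change-log records, and each storage replica durably appends the record before applying it to its local copy of the page.

```python
# Toy sketch of Aurora's "ship only the changelog" idea.
# All class and field names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    lsn: int          # log sequence number
    page_id: int
    new_value: str

@dataclass
class StorageNode:
    log: list = field(default_factory=list)
    pages: dict = field(default_factory=dict)

    def receive(self, rec: LogRecord) -> None:
        self.log.append(rec)                      # 1. persist the log record first
        self.pages[rec.page_id] = rec.new_value   # 2. then apply it to the page

# Six replicas across three data centers, as in the article's example.
replicas = [StorageNode() for _ in range(6)]
rec = LogRecord(lsn=1, page_id=42, new_value="balance=100")
for node in replicas:
    node.receive(rec)

print(replicas[0].pages[42])  # balance=100
```

Because only the compact log record crosses the network, not whole data pages, every replica converges to the same page state while bandwidth use stays low.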

AWS Aurora Pricing

At $0.12 per ACU-hour, Aurora Serverless ACUs are twice as expensive as provisioned Aurora ACUs. This implies:

The minimum running configuration of 4 ACUs costs $0.48 per hour, or about $350 per month.

Thus, the baseline monthly price for Aurora Serverless v2 is about $350, and each auto-scale event is priced at a minimum of $0.0005. Provisioned Aurora with the same capacity costs $175 per month, but without the flexible serverless auto-scaling.

AWS will probably lower the minimum running cost over time, but because of the cold-start issues in the process-based design, they cannot eliminate it. Even at provisioned mode's minimum capacity of two ACUs, the monthly cost would be $175. This price also excludes a variety of items, such as bandwidth, multi-region replication, primary and backup storage, and read replica operations.
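As a rough sanity check on the figures above, the compute portion of the bill can be sketched in a few lines of Python. The 730-hour month and the provisioned rate of $0.06 per ACU-hour (inferred from the article's "twice as expensive" claim) are assumptions; real bills also include storage, I/O, and bandwidth.

```python
# Rough Aurora compute-cost estimate using the rates quoted above.
ACU_HOUR_RATE = 0.12   # USD per ACU-hour, serverless (from the article)
HOURS_PER_MONTH = 730  # assumed average month length

def monthly_compute_cost(acus: float, rate: float = ACU_HOUR_RATE) -> float:
    """Approximate monthly compute cost for a steady ACU level."""
    return acus * rate * HOURS_PER_MONTH

print(round(monthly_compute_cost(4)))        # serverless minimum of 4 ACUs -> 350
print(round(monthly_compute_cost(4, 0.06)))  # same capacity at the assumed provisioned rate -> 175
```

The two results reproduce the article's $350 serverless baseline and the $175 provisioned figure for equivalent capacity.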

Pricing for On-Demand Instances

Arguably Aurora's strongest feature is that it charges only for the capacity actually used, with no long-term plans or yearly billing. This is incredibly helpful if you need Aurora for a quick project or test, since you only pay for what you require. Pricing is based on the DB-instance hours used until the instance is stopped or terminated.

MySQL Compatibility

Amazon Aurora is compatible with MySQL versions 5.6 and 5.7 and guarantees first-rate service for both. Aurora supports only the InnoDB storage engine; MyISAM is incompatible, so data currently stored in MyISAM must be migrated to InnoDB. Key features include scalability and high performance (up to 5 times faster than MySQL), Backtrack (which makes data backup quick and easy), storage auto-scaling, managed monitoring, automatic software updates, migration support, and cost-effective pay-per-use pricing.

PostgreSQL Compatibility

Amazon Aurora PostgreSQL is compatible with PostgreSQL versions 9.6 and 10. Since it improves database performance and efficiency, pairing PostgreSQL with Aurora is highly advantageous.

Aurora instances are launched through the Amazon Relational Database Service (RDS) console by selecting Amazon Aurora as the engine with PostgreSQL compatibility. Key features include high performance and scalability (up to three times PostgreSQL's performance), Backtrack (which speeds up data backup), storage auto-scaling (as your storage demands rise, Amazon Aurora automatically expands the size of your database volume), strong security, managed monitoring, automatic software updates, migration support, and cost-effective pay-per-use pricing.

