Senators Once More Try To Ban End


A group of Republican senators is making yet another attempt to ban end-to-end encryption in messaging services, which would make Apple’s Messages and FaceTime services illegal, along with a wide range of other messaging apps like WhatsApp, Signal, and Telegram.

No surprise, either, that they are again demonstrating that they don’t understand how end-to-end encryption works …

Three senators have proposed the Lawful Access to Encrypted Data Act.

Senate Judiciary Committee Chairman Lindsey Graham (R-South Carolina) and U.S. Senators Tom Cotton (R-Arkansas) and Marsha Blackburn (R-Tennessee) today introduced the Lawful Access to Encrypted Data Act, a bill to bolster national security interests and better protect communities across the country by ending the use of “warrant-proof” encrypted technology by terrorists and other bad actors to conceal illicit behavior.

“Terrorists and criminals routinely use technology, whether smartphones, apps, or other means, to coordinate and communicate their daily activities. In recent history, we have experienced numerous terrorism cases and serious criminal activity where vital information could not be accessed, even after a court order was issued. Unfortunately, tech companies have refused to honor these court orders and assist law enforcement in their investigations. My position is clear: After law enforcement obtains the necessary court authorizations, they should be able to retrieve information to assist in their investigations,” said Graham.

The claim is, of course, nonsense. Tech companies do not ‘refuse’ to assist law enforcement. Apple cooperates with numerous law enforcement investigations, including handing over complete copies of iCloud backups.

Many service providers and device manufacturers continue refusing to cooperate with law enforcement to help recover encrypted data, even when presented with a lawful warrant supported by probable cause.

Again, no. They don’t provide access to end-to-end encrypted messages because they can’t. That is, literally, the whole point of end-to-end encryption: it protects privacy by ensuring that only the parties involved in the communication can decrypt the contents.

The bill also makes it sound like it is adding a new safeguard.

The bill would require service providers and device manufacturers to provide assistance to law enforcement when access to encrypted devices or data is necessary – but only after a court issues a warrant, based on probable cause that a crime has occurred, authorizing law enforcement to search and seize the data.  

It isn’t: that’s the exact legal position today.

It’s not the first time senators have tried to outlaw strong encryption. The first such attempt in the US was made back in 2016, following Apple’s refusal to create a backdoor into iOS to unlock an iPhone 5C used by one of the San Bernardino shooters. The FBI later paid a commercial company to access the phone.

Three years later, in 2019, the Trump administration proposed making another attempt to ban end-to-end encryption. Later the same year, the Senate Judiciary Committee again threatened legal action against companies using strong encryption. Other governments around the world have proposed the same thing, demonstrating exactly the same failure to grasp how end-to-end encryption works.

Technically, there would be one way to break end-to-end encryption, known as ‘the ghost proposal.’ This would require Apple and other companies to deceive their customers by creating fake devices linked to their Apple IDs. However, as we’ve pointed out before, if messaging services did this, they would no longer be using end-to-end encryption.

However, that would only be possible because it would break authentication of participants in the chat, which is a key component of end-to-end encrypted messaging. If you take an end-to-end encrypted messaging service and compromise the authentication process, you no longer have an end-to-end encrypted messaging service. The whole point of end-to-end encryption is that only authorized participants can decrypt it.
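To make the mechanics concrete, here is a minimal sketch of why a relay, or a silently added ‘ghost’ device, cannot read end-to-end encrypted traffic. It uses the PyNaCl library purely as an illustration; the library choice and all names here are assumptions for the sketch, not how Apple, WhatsApp, or Signal actually implement their protocols.

# Minimal end-to-end encryption sketch (assumes: pip install pynacl). Illustrative only.
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

alice, bob, ghost = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts directly to Bob's public key; any server in between only relays bytes.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# Bob holds the matching private key, so he can decrypt.
print(Box(bob, alice.public_key).decrypt(ciphertext))  # b'meet at noon'

# A key pair added server-side (the 'ghost') cannot decrypt the same message.
try:
    Box(ghost, alice.public_key).decrypt(ciphertext)
except CryptoError:
    print("ghost key fails: only the intended endpoints can decrypt")

The only way to let the ghost read messages is to make Alice encrypt to its key as well, which is exactly the authentication break described above.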

Photo: Politico

FTC: We use income earning auto affiliate links. More.


A Guide To Building An End-To-End Multi-Class Text Classification Model

This article was published as a part of the Data Science Blogathon.

Knock! Knock!

Who’s there?

It’s Natural Language Processing!

Today we will implement a multi-class text classification model on an open-source dataset and explore more about the steps and procedure. Let’s begin.

Table of Contents

Dataset

Loading the data

Feature Engineering

Text processing

Exploring Multi-classification Models

Compare Model performance

Evaluation

Prediction

Dataset for Text Classification

The dataset consists of real-world complaints received from customers regarding financial products and services. Each complaint is labeled with a specific product, so this is a supervised problem: we have both the input text and the target output. We will experiment with different machine learning algorithms and check which one works best.

Our aim is to classify consumer complaints into predefined categories using a suitable classification algorithm. For now, we will be using the following classification algorithms:

Linear Support Vector Machine (LinearSVM)

Random Forest

Multinomial Naive Bayes

Logistic Regression.

Loading the Data

Download the dataset from the link given in the above section. Since I am using Google Colab, you can import the dataset from your Google Drive using the link given here. The code below mounts the drive and unzips the data into the current working directory in Colab.

from google.colab import drive
drive.mount('/content/drive')
!unzip /content/drive/MyDrive/rows.csv.zip

First, we will install the required modules.

pip install numpy

pip install pandas

pip install seaborn

pip install scikit-learn

pip install scipy

Once everything is successfully installed, we will import the required libraries.

import os
import pandas as pd
import numpy as np
from scipy.stats import randint
import seaborn as sns  # used for plotting interactive graphs
import matplotlib.pyplot as plt
from io import StringIO
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2
from IPython.display import display
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics

Now let us load the dataset and check the shape of the loaded data.

# loading data
df = pd.read_csv('/content/rows.csv')
print(df.shape)

From the output of the above code, we can see that the dataset is huge and has 18 columns. Let us see what the data looks like. Execute the code below.

df.head(3).T

Now, for our multi-class text classification task, we will use only two of these 18 columns: ‘Product’ and ‘Consumer complaint narrative’. Let us create a new DataFrame to store only these two columns and, since we have enough rows, remove all the missing (NaN) values. To make it easier to work with, we will rename the second column of the new DataFrame to ‘Consumer_complaint’.

# Create a new dataframe with two columns
df1 = df[['Product', 'Consumer complaint narrative']].copy()

# Remove missing values (NaN)
df1 = df1[pd.notnull(df1['Consumer complaint narrative'])]

# Renaming second column for a simpler name
df1.columns = ['Product', 'Consumer_complaint']

print(df1.shape)
df1.head(3).T

We can see that after discarding all the missing values, we have around 383k rows and 2 columns; this will be our training data. Now let us check how many unique products there are.

pd.DataFrame(df1.Product.unique()).values

There are 18 categories of products. To make the training process easier, we will make some changes to the category names.

# Because the computation is time consuming (in terms of CPU), the data was sampled
df2 = df1.sample(10000, random_state=1).copy()

# Renaming categories
df2.replace({'Product':
    {'Credit reporting, credit repair services, or other personal consumer reports': 'Credit reporting, repair, or other',
     'Credit reporting': 'Credit reporting, repair, or other',
     'Credit card': 'Credit card or prepaid card',
     'Prepaid card': 'Credit card or prepaid card',
     'Payday loan': 'Payday loan, title loan, or personal loan',
     'Money transfer': 'Money transfer, virtual currency, or money service',
     'Virtual currency': 'Money transfer, virtual currency, or money service'}},
    inplace=True)

pd.DataFrame(df2.Product.unique())

The 18 categories are now reduced to 13; we have combined ‘Credit card’ and ‘Prepaid card’ into a single class, and so on.

Now, we will map each of these categories to a number so that our model can understand them better, and save this in a new column named ‘category_id’, where each of the 13 categories is represented numerically.

# Create a new column 'category_id' with encoded categories
df2['category_id'] = df2['Product'].factorize()[0]
category_id_df = df2[['Product', 'category_id']].drop_duplicates()

# Dictionaries for future use
category_to_id = dict(category_id_df.values)
id_to_category = dict(category_id_df[['category_id', 'Product']].values)

# New dataframe
df2.head()

Let us visualize the data and see how many complaints there are per category. We will use a bar chart here.

fig = plt.figure(figsize=(8,6))
colors = ['grey','grey','grey','grey','grey','grey','grey','grey','grey',
          'grey','darkblue','darkblue','darkblue']
df2.groupby('Product').Consumer_complaint.count().sort_values().plot.barh(
    ylim=0, color=colors, title='NUMBER OF COMPLAINTS IN EACH PRODUCT CATEGORY\n')
plt.xlabel('Number of occurrences', fontsize=10);

The above graph shows that most of the customers complained about:

Credit reporting, repair, or other

Debt collection

Mortgage

Text processing

The text needs to be preprocessed so that we can feed it to the classification algorithm. Here we will transform the texts into vectors using Term Frequency-Inverse Document Frequency (TF-IDF), which evaluates how important a particular word is within the collection of documents. For this we remove punctuation and lowercase the text; word importance is then determined in terms of frequency.
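Before applying this to the complaints, here is a small toy illustration (an addition for clarity, not part of the original notebook) of what TfidfVectorizer produces: words shared by many documents are down-weighted, while words specific to one document score higher in that document.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: 'payment' appears in two documents, 'dispute' in only one,
# so 'dispute' gets a relatively higher weight in its document.
docs = ["late payment fee charged",
        "payment reported late to credit bureau",
        "credit card fee dispute"]

toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(docs)
for word, col in sorted(toy_vec.vocabulary_.items()):
    print(word, toy_matrix[:, col].toarray().ravel().round(2))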

We will be using the TfidfVectorizer function with the parameters below:

min_df: remove the words which have occurred in fewer than ‘min_df’ number of documents.

sublinear_tf: if True, scale the term frequency on a logarithmic scale.

stop_words: removes the stop words which are predefined for ‘english’.

tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5,
                        ngram_range=(1, 2),
                        stop_words='english')

# We transform each complaint into a vector
features = tfidf.fit_transform(df2.Consumer_complaint).toarray()
labels = df2.category_id

print("Each of the %d complaints is represented by %d features (TF-IDF score of unigrams and bigrams)" % (features.shape))

Now, we will find the terms most correlated with each of the defined product categories. Here we are finding only the three most correlated terms.

# Finding the three most correlated terms with each of the product categories
N = 3
for Product, category_id in sorted(category_to_id.items()):
    features_chi2 = chi2(features, labels == category_id)
    indices = np.argsort(features_chi2[0])
    feature_names = np.array(tfidf.get_feature_names())[indices]
    unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
    bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
    print("  * Most Correlated Unigrams are: %s" % (', '.join(unigrams[-N:])))
    print("  * Most Correlated Bigrams are: %s" % (', '.join(bigrams[-N:])))

* Most Correlated Unigrams are: overdraft, bank, scottrade
* Most Correlated Bigrams are: citigold checking, debit card, checking account
* Most Correlated Unigrams are: checking, branch, overdraft
* Most Correlated Bigrams are: 00 bonus, overdraft fees, checking account
* Most Correlated Unigrams are: dealership, vehicle, car
* Most Correlated Bigrams are: car loan, vehicle loan, regional acceptance
* Most Correlated Unigrams are: express, citi, card
* Most Correlated Bigrams are: balance transfer, american express, credit card
* Most Correlated Unigrams are: report, experian, equifax
* Most Correlated Bigrams are: credit file, equifax xxxx, credit report
* Most Correlated Unigrams are: collect, collection, debt
* Most Correlated Bigrams are: debt collector, collect debt, collection agency
* Most Correlated Unigrams are: ethereum, bitcoin, coinbase
* Most Correlated Bigrams are: account coinbase, coinbase xxxx, coinbase account
* Most Correlated Unigrams are: paypal, moneygram, gram
* Most Correlated Bigrams are: sending money, western union, money gram
* Most Correlated Unigrams are: escrow, modification, mortgage
* Most Correlated Bigrams are: short sale, mortgage company, loan modification
* Most Correlated Unigrams are: meetings, productive, vast
* Most Correlated Bigrams are: insurance check, check payable, face face
* Most Correlated Unigrams are: astra, ace, payday
* Most Correlated Bigrams are: 00 loan, applied payday, payday loan
* Most Correlated Unigrams are: student, loans, navient
* Most Correlated Bigrams are: income based, student loan, student loans
* Most Correlated Unigrams are: honda, car, vehicle
* Most Correlated Bigrams are: used vehicle, total loss, honda financial

Exploring Multi-classification Models

The classification models which we are using:

Random Forest

Linear Support Vector Machine

Multinomial Naive Bayes

Logistic Regression.

For more information regarding each model, you can refer to their official guide.

Now, we will split the data into train and test sets, using 75% of the data for training and the rest for testing. The column ‘Consumer_complaint’ will be our X, or the input, and ‘Product’ is our y, or the output.

X = df2['Consumer_complaint']  # Collection of documents
y = df2['Product']  # Target or the labels we want to predict (i.e., the 13 different product categories)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

We will keep all the models in a list and loop through it, computing a mean accuracy and standard deviation for each model so that we can compare their performance and decide which model to move forward with.

models = [
    RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0),
    LinearSVC(),
    MultinomialNB(),
    LogisticRegression(random_state=0),
]

# 5-fold cross-validation
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))

entries = []
for model in models:
    model_name = model.__class__.__name__
    accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
    for fold_idx, accuracy in enumerate(accuracies):
        entries.append((model_name, fold_idx, accuracy))

cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])

The above code will take some time to complete its execution.

Compare Text Classification Model performance

Here, we will compare the ‘Mean Accuracy’ and ‘Standard Deviation’ for each of the four classification algorithms.

mean_accuracy = cv_df.groupby('model_name').accuracy.mean()
std_accuracy = cv_df.groupby('model_name').accuracy.std()

acc = pd.concat([mean_accuracy, std_accuracy], axis=1, ignore_index=True)
acc.columns = ['Mean Accuracy', 'Standard deviation']
acc

From the above table, we can clearly see that ‘Linear Support Vector Machine’ outperforms all the other classification algorithms. So, we will use LinearSVC for our multi-class text classification task.

plt.figure(figsize=(8,5))
sns.boxplot(x='model_name', y='accuracy',
            data=cv_df,
            color='lightblue',
            showmeans=True)
plt.title("MEAN ACCURACY (cv = 5)\n", size=14);

Evaluation of Text Classification Model

Now, let us train our model using ‘Linear Support Vector Machine’, so that we can evaluate and check its performance on unseen data.

X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(
    features, labels, df2.index, test_size=0.25, random_state=1)

model = LinearSVC()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

We will generate a classification report to get more insight into model performance.

# Classification report
print('\t\t\t\tCLASSIFICATION METRICS\n')
print(metrics.classification_report(y_test, y_pred,
                                    target_names=df2['Product'].unique()))

From the above classification report, we can observe that the classes with a greater number of occurrences tend to have a better f1-score than the other classes. The categories which yield the best classification results are ‘Student loan’, ‘Mortgage’ and ‘Credit reporting, repair, or other’. Classes like ‘Debt collection’ and ‘Credit card or prepaid card’ also give good results. Now let us plot the confusion matrix to check the misclassified predictions.

conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(8,8))
sns.heatmap(conf_mat, annot=True, cmap="Blues", fmt='d',
            xticklabels=category_id_df.Product.values,
            yticklabels=category_id_df.Product.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.title("CONFUSION MATRIX - LinearSVC\n", size=16);

From the above confusion matrix, we can say that the model is doing a pretty decent job. It has classified most of the categories accurately.

Prediction

Let us make some predictions on unseen data and check the model performance.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5,
                        ngram_range=(1, 2),
                        stop_words='english')

fitted_vectorizer = tfidf.fit(X_train)
tfidf_vectorizer_vectors = fitted_vectorizer.transform(X_train)

model = LinearSVC().fit(tfidf_vectorizer_vectors, y_train)

Now run the prediction.

complaint = """I have received over 27 emails from XXXX XXXX who is a representative from Midland Funding LLC. From XX/XX/XXXX I received approximately 6 emails. From XX/XX/XXXX I received approximately 6 emails. From XX/XX/XXXX I received approximately 9 emails. From XX/XX/XXXX I received approximately 6 emails. All emails came from the same individual, XXXX XXXX. It is becoming a nonstop issue of harassment."""
print(model.predict(fitted_vectorizer.transform([complaint])))

complaint = """Respected Sir/ Madam, I am exploring the possibilities for financing my daughter 's XXXX education with private loan from bank. I am in the XXXX on XXXX visa. My daughter is on XXXX dependent visa. As a result, she is considered as international student. I am waiting in the Green Card ( Permanent Residency ) line for last several years. I checked with Discover, XXXX XXXX websites. While they allow international students to apply for loan, they need cosigners who are either US citizens or Permanent Residents. I feel that this is unfair. I had been given mortgage and car loans in the past which I closed successfully. I have good financial history."""
print(model.predict(fitted_vectorizer.transform([complaint])))

complaint = """They make me look like if I was behind on my Mortgage on the month of XX/XX/2023 & XX/XX/XXXX when I was not and never was, when I was even giving extra money to the Principal. The Money Source Web site and the managers started a problem, when my wife was trying to increase the payment, so more money went to the Principal and two payments came out that month and because I reverse one of them thru my Bank as Fraud they took revenge and committed slander against me by reporting me late at the Credit Bureaus, for 45 and 60 days, when it was not thru. Told them to correct that and the accounting department or the company revert that letter from going to the Credit Bureaus to correct their injustice. The manager by the name XXXX requested this for the second time and nothing yet. I am a Senior of XXXX years old and a Retired XXXX Veteran and is a disgraced that Americans treat us that way and do not want to admit their injustice and lies to the Credit Bureau."""
print(model.predict(fitted_vectorizer.transform([complaint])))

The model is not perfect, yet it is performing very well.

The notebook is available here.

Conclusion

We have implemented a basic multi-class text classification model. You can play with other models like XGBoost, or try to compare the performance of multiple models on this dataset using a machine learning framework like AutoML. And this is not all; there are still complex problems within multi-class text classification, so you can always explore more and acquire new concepts and ideas about this topic. That’s it!
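As one concrete starting point for that exploration, here is a minimal sketch of swapping in XGBoost on the same TF-IDF features (this assumes the xgboost package is installed; the parameters are illustrative rather than tuned):

# pip install xgboost
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

# Reuse `features` (TF-IDF matrix) and `labels` (integer category_id) from above.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=1)

xgb = XGBClassifier(n_estimators=200, max_depth=6)
xgb.fit(X_train, y_train)
print("XGBoost accuracy:", metrics.accuracy_score(y_test, xgb.predict(X_test)))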

Thank you!

All images are created by the author.

My LinkedIn

The media shown in this article is not owned by Analytics Vidhya and are used at the Author’s discretion


Is The Tableau Era Coming To An End?

The announcement last week that Tableau’s CEO Adam Selipsky is stepping down felt more significant than the casual media coverage it received. To me, it was a signal that the murmurings of discontent I’ve been hearing were true: the era of Tableau is over.

The Glory Days

While Tableau first came about in 2003, they really hit their stride in the early 2010s — and what a stride it was. Users heralded the tool as ‘revolutionary’ and ‘life-changing.’ Their annual conferences sold out in minutes. Participants would come together with hundreds of others, proudly brandishing swag that read ‘We Are Data People’ as they attended roller-blading socials and “Iron Viz” competitions. As I said, it was having a real moment.

For many of us (I, too, drank the Kool-Aid), it was affirming and exciting to see data being celebrated, not relegated to the sidelines. Tableau told us being in data was not just cool, but also irrefutably important.

What’s Changed?

But instead of these being even more glorious Glory Days, what we have is an all-too-often underwhelming experience all around:

“Machine learning specialists topped its list of developers who said they were looking for a new job, at 14.3 per cent. Data scientists were a close second, at 13.2 per cent.” [1]

And even more damning:

“Among the 90% of companies that have made some investment in AI, fewer than 2 out of 5 report business gains from AI in the past three years.” [2]

Eesh. Clearly, there’s work to be done.

The Haunting

So what are these ghosts that are getting in our way?

Data === Dashboard

To many business users, data is now synonymous with dashboards. While this is a seemingly benign misunderstanding, it actually causes a whole slew of downstream effects, namely:

Thinking Tableau will ‘fix’ your data problems. Many companies make the mistake of assuming the only thing your data team needs is Tableau (or Power BI). This kind of thinking ignores the more common pain points of bringing data sources together, cleaning and transforming the data, and doing the actual analysis itself, which, if you ask any analyst, are the most traumatic parts of any analysis. By not investing in these problems, you’re telling your data team that their work is less important than the business’s interpretation of it.

Asking dashboards to do too much. Since Tableau is the only tool many teams have to present data they are forced to turn everything into a dashboard which significantly reduces the impact a more nuanced, thoughtful analysis could have. By stripping away context, explanation, and narrative from the analyst, dashboards become a Rorschach test where everyone can see what they want to see.

While users are now more comfortable looking at basic charts, we’ve made little progress in educating our business partners in fundamental data concepts. Dashboards don’t give us the stage needed to explain, for example, why correlation does not equal causation. This means it’s become nearly impossible to explain the significance of our more complicated predictive models or statistical analysis which are required to realize the dreams of our current era.

Hyper Specialization of Tools

One of the great things about Tableau at the start was that it just sat on top of your database, making it easy to ‘plug in’ to your existing stack of data tools without much effort. This model has been used by pretty much every data tool since, creating separate tools for data pipelines, data cleaning, data transformation, data analysis, and of course, data visualization. This approach is completely fragmenting analysts’ workflows, causing significant pain and delays in each analysis. As a result, most analysts and data scientists have adopted a ‘not my data tool’ mentality — acknowledging Tableau as a necessary evil to get their work noticed. Check out this Reddit thread to see for yourself.

“If there were a button that would nuke all the Tableau servers in the world, I am pressing that button.” -Anonymous Data Professional

Remember those ‘murmurings of discontent’ I mentioned at the start…

Ghostbusters

We have an increasingly urgent need to find solutions to these issues before we find ourselves again fighting for relevancy and attention to data. To do that, we need to start focusing on the following two areas:

Present more than numbers

It’s time to give data more of a voice. Dashboards are great for things where there is a shared context and a straightforward decision. But for many things, those conditions are not met, and therefore we need a new approach.

I, and others, have been banging the drum on data notebooks as a solution for some time now. They can tell the story, explain the methodology, and build nice visuals without sacrificing interactivity or presentability.

By using more notebooks we can start to wean off a culture that’s been jonesing for dashboards. We can start to work with our business partners instead of lobbing questions and charts back and forth over an imaginary wall.

Pick tools the data team wants

Data analysts and scientists see a red flag when a potential employer has Tableau and little else in the way of data engineering or data analysis tools (e.g. running Tableau on your un-transformed MySQL 5 database). This signals that they aren’t prioritizing the work that these analysts will do. This needs to stop. ASAP.

Depending on the analysis your team is doing, the ‘right’ tools will differ. But there are so many options out there; you just need to make sure you’re investing as much in the work it takes to produce great analysis as you are in a tool to make the business look at it.

And hey, you’ll probably end up keeping some of those data scientists that are, according to the stats, most likely shopping around.

Conclusion

We all owe a great deal to Tableau for the current attention data receives in our businesses. To make good on this opportunity though, and move into a new Golden Age of data, we need to address and remedy some of the ghosts of the Tableau era that are holding us back.

Data notebooks present an option that can give your team the flexibility it needs to start to move past the Tableau and into the next era.

At Count, we’re excited to be part of this new movement of data tools designed for modern challenges. You can learn more about the Count notebook here.

References


[1] Walter, Richard, “How machine learning creates new professions — and problems,” Financial Times, November 2023.

[2] S. Ransbotham, S. Khodabandeh, R. Fehling, B. LaFountain, D. Kiron, “Winning With AI,” MIT Sloan Management Review and Boston Consulting Group, October 2023.

[3] Header image by Luke Chesser on Unsplash

The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion.


How To Make An End Portal In Minecraft

The End portal is easily one of the most important structures in Minecraft. Yet unlike all the other structures in the game, most players are generally clueless when it comes to its creation. Everyone is focused on finding the Ender portal with Minecraft speedrun seeds. But we are here to change that by explaining how to make an End portal in Minecraft. From unique custom portals to transporting Ender portals, we have covered everything in this guide. It’s unlike any build you have ever made, so let’s not beat around the bush and jump in!

How to Make an End Portal in Minecraft (2023)

What is an End Portal in Minecraft?

You first have to find a Stronghold in Minecraft and then locate the End portal inside it. After locating it, you have to place 12 Eyes of Ender in the structure to activate the portal. Without activation, the portal isn’t useful in any way.

Can You Make an End Portal Manually?

Items Needed to Make Minecraft Ender Portal

12 End Portal Frames

12 Eyes of Ender

How to Make the Eyes of Ender

An End portal is a hollow square structure that has a row of three End portal frames on each of its four sides, while the 3 x 3 middle area is empty. Unfortunately, you can’t obtain End portal frames without commands or creative mode, which is the reason you can’t make the Ender portal manually in other game modes.

On the other hand, you can craft the Eyes of Ender in any game mode by combining Ender pearls and blaze powder on a crafting table. Our in-depth guide on how to make the Eyes of Ender and use them is coming soon, so stay tuned. For now, to save time, we suggest you take them from the creative inventory (redesigned in Minecraft 1.20) or obtain them via commands as well.

How to Obtain End Portal Frames

Follow the steps below to obtain the End portal frame in Minecraft:

1. First, you need to enable cheats in your world. This option is available within the world settings on the Bedrock edition and the LAN World option in the Java edition. You can access this setting from the pause menu and simply need to toggle “Activate Cheats” or “Allow Cheats” in your world.

2. Then, use the following command to switch to creative mode:

/gamemode creative

3. Finally, press the “E” key or your dedicated inventory key to open the creative inventory and get the End portal frames. You only need one frame as it doesn’t get exhausted. Moreover, while you are at it, make sure to collect the Eyes of Ender as well.

4. Alternatively, if you are on one of the earlier versions that don’t show frames in the creative inventory, you have to rely on commands to get them. Use the following commands to get End portal frames and Eyes of Ender in Minecraft.

/give @s minecraft:end_portal_frame

/give @s minecraft:ender_eye

How to Make an End Portal in Minecraft

With the required items in your inventory, here’s how to make an End portal in Minecraft:

1. First, make two rows with three End portal frames, which are parallel to each other. While doing so, remember to leave a gap of three blocks between the rows.

2. Then, create two new rows of three End portal frames, which are adjacent to the existing rows and parallel to each other. They should complete the square End portal structure, leaving a gap of three blocks in between.
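If the layout is hard to picture, this short Python sketch (a hypothetical helper, not part of the original guide) prints one /setblock command per frame: three frames on each side of the empty 3 x 3 interior, each facing inward so the Eyes of Ender point toward the center.

# Hypothetical helper: prints the 12 /setblock commands for an End portal
# whose 3x3 interior has its lowest-coordinate corner at (x, y, z) (Java Edition syntax).
def end_portal_frame_commands(x, y, z):
    cmds = []
    for i in range(3):
        cmds.append(f"/setblock {x + i} {y} {z - 1} minecraft:end_portal_frame[facing=south]")  # north side
        cmds.append(f"/setblock {x + i} {y} {z + 3} minecraft:end_portal_frame[facing=north]")  # south side
        cmds.append(f"/setblock {x - 1} {y} {z + i} minecraft:end_portal_frame[facing=east]")   # west side
        cmds.append(f"/setblock {x + 3} {y} {z + i} minecraft:end_portal_frame[facing=west]")   # east side
    return cmds

for cmd in end_portal_frame_commands(0, 64, 0):
    print(cmd)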

How to Activate an End Portal in Minecraft

1. First, turn on the coordinates in your Minecraft world. You need to turn on a toggle in the world options in the Bedrock edition. The Java Edition users only need to press the F3 key to see the coordinates.

2. Then, go to any of the corners of the main portal area to find its coordinates (A B C). After that, go to the opposite corner and note down its coordinates as well (X Y Z).

4. Finally, use the following command to fill the portal area with the End dimension gateway and activate the End portal in Minecraft.

/fill A B C X Y Z minecraft:end_portal

Make Some Unique Custom End Portals

As you might have guessed at this point, the most important part of the End portal is the dimensional fluid. So you can use any other block in the place of the End portal frame to create a variety of unique portals. It is the perfect addition to your custom Minecraft maps and large bases.

Make and Activate Ender Portal in Minecraft

Golang Program To Demonstrate Begin And End Blocks

In this Go language article, we will write programs to demonstrate BEGIN and END blocks using the sum of two numbers, iteration, and two variables.

Blocks are used to group statements; the variables declared in a block can only be used in that scope and not outside it. They also help with code readability.

Algorithm

Step 1 − Create a package main and import the fmt (format) package in the program, where main produces executable code and fmt helps in formatting input and output.

Step 2 − Create a main function and in this function create two begin and end blocks enclosed within curly braces.

Step 3 − In the first block take two numbers in two variables named a and b, add these numbers and assign them to c.

Step 4 − Print c on the console using the Printf function from the fmt package.

Step 5 − In the second block take the name and age of a person and print them similarly to the last step.

Example 1

In this Example, we will create two begin and end blocks in the main function. In the first block we will print the sum of two numbers and in the second block we will print the name and age of a person.

package main

import "fmt"

func main() {
    {
        a := 10
        b := 20
        c := a + b
        fmt.Printf("The sum of %d and %d is %d\n", a, b, c)
    }
    {
        name := "Ronit"
        age := 34
        fmt.Printf("%s is %d years old\n", name, age)
    }
}

Output

The sum of 10 and 20 is 30
Ronit is 34 years old

Example 2

In this illustration, we will create two begin and end blocks, and anonymous functions will be used to execute them. In the first block some statements will be executed, whereas in the second block numbers will be printed using iteration.

package main

import "fmt"

func main() {
    func() {
        defer fmt.Println("This statement is executed last in the program")
        fmt.Println("This statement is executed first in the program")
    }()
    func() {
        for i := 1; i <= 6; i++ {
            defer fmt.Printf("%d ", i)
        }
    }()
}

Output

This statement is executed first in the program
This statement is executed last in the program
6 5 4 3 2 1

Example 3

In this Example, we will write a Go language program to demonstrate the BEGIN and END blocks using two variables a and b.

package main

import "fmt"

func main() {
    {
        a := 20
        fmt.Println("Value of a inside begin block:", a)
    }
    {
        b := "Hello, alexa!"
        fmt.Println("Value of b inside begin block:", b)
    }
}

Output

Value of a inside begin block: 20
Value of b inside begin block: Hello, alexa!

Conclusion

We executed programs demonstrating begin and end blocks. In the first example, we created two blocks and printed the output using the fmt package; in the second example, we used the defer keyword while printing the output of the blocks; and in the third example, we used two variables.

Huawei Wants To Negotiate With The United States About Easing The Ban

According to Nikkei Asia, the detention of Meng Wanzhou, the Chinese tech giant’s chief financial officer, will also be on the agenda in the negotiations. Tim Danks, Vice President of risk management and partner relations at Huawei Technologies USA, said: “We want to have a discussion with the U.S. administration separately from the Chinese government. We don’t want to be lumped into that discussion.” At this point, the company has yet to have a chance to speak with the new Biden administration, but hopes to have discussions soon.

Last year, the Trump administration placed the Chinese brand on an export blacklist called the Entity List over security allegations. This made it impossible for Huawei to buy critical components from US firms. But the company is now hoping for a “tweak and temporary license” that will allow US companies to continue selling to the Chinese brand.

Tim Danks also added that “in the short term, US sales will not be a priority for Huawei. Our priority is the supply chain.” However, Biden’s nominee for Secretary of Commerce, Gina Raimondo, previously stated that she “will use the full toolbox at her disposal to protect America and our networks from Chinese interference or any behind-the-scenes influence on our network, be it Huawei, ZTE, or any other company.” So it remains to be seen what position the new administration will take, even if negotiations with Huawei take place.

Huawei: We have more than a billion active smartphones worldwide

Huawei Founder and CEO Ren Zhengfei recently spoke at the GTS Cloud and Device Cloud Cooperation and Integration Progress conference. During his speech, he stated that Huawei has a wide range of devices; and the number of active smartphones has already exceeded one billion.


Huawei has notified its suppliers that orders for smartphone parts will fall more than 60% in 2021. Huawei will ship 70 to 80 million smartphones this year, down from 189 million last year, according to official figures.

Huawei: We will never sell our smartphone business

Ren Zhengfei, founder and CEO of Huawei, said on Tuesday that Huawei will survive the sanctions imposed by Donald Trump and looks forward to a renewed relationship with the United States now that new President Joe Biden has come to power.

Joe Biden took over as head of the White House last month. Huawei now expects the new US president to improve relations between the two countries; as well as American and Chinese companies. Ren Zhengfei said Huawei remains determined to buy equipment from US companies and that restoring Huawei’s access to US goods is mutually beneficial. In addition, he suggests that the restrictions on the Chinese tech giant will hurt US suppliers.

“We hope the new US administration would have an open policy for the benefit of American firms and the economic development of the United States,” said Ren. “We still hope that we can buy large volumes of American materials, components, and equipment so that we can all benefit from China’s growth.”

The leader of the company also denied information that Huawei is going out of the smartphone business.

“We have decided we absolutely will not sell off our consumer devices, our smartphone business,” he said.

The company will unveil the flagship foldable smartphone Huawei Mate X2 on February 22nd; and the Huawei P50 is expected to be announced in March.
