AT&T Galaxy Note Android 4.1.2 Jelly Bean Update Is Official


We may be on the brink of being introduced to Android 4.3, but there are many devices out there that are only just getting updated to Android 4.1 – one of those is the AT&T variant of the Galaxy Note, which has finally received its official Android 4.1.2 Jelly Bean update.

The Android 4.1.2 update for the first-generation Galaxy Note brings many new features from the Galaxy S3 and Note 2 – multi-window multitasking, customizable notification toggles, new widgets, two homescreen modes, smart features like Direct Call, Smart Stay and Pop-Up play, and also the new and redesigned TouchWiz interface that is found on all new Samsung devices.

You also get all the usual Jelly Bean goodies such as a buttery smooth and responsive interface, Google Now, rich actionable and expandable notifications, faster browser experience, offline voice typing, high-resolution contact photos, resizable widgets on the homescreen, and increased stability that will provide for a much improved overall user experience.

If you haven’t received the update yet and would like to update manually (especially if you’re using a custom ROM right now, in which case a manual update is the only option), you can use the full firmware file to update with the help of our guide. It’s a pretty easy procedure and you should have your Galaxy Note updated in no time.

Let’s see how the official Android 4.1.2 update can be installed on the AT&T Galaxy Note.

Compatibility

The guide below is compatible only with the AT&T Samsung Galaxy Note, model number SGH-I717. It will not work on other carrier variants or on the international Galaxy Note. Check your device’s model number in: Settings » About phone.

Warning

The methods and procedures discussed here are considered risky, and you should not attempt them unless you fully understand what they involve. If any damage occurs to your device, we won’t be held liable.

Extract the file downloaded in step 3 on the computer to obtain a file named KIES_HOME_I717UCMD3_I717ATTMD3_1117019_REV02_user_low_ship.tar.md5 (the file name may end at .tar, which is normal). This is the actual firmware file that we need to flash on the phone.

Extract the contents of the Odin3_v3.04.zip file to a folder on your computer. You should obtain a total of 4 files after extracting it.

Turn off your phone and wait for it to shut down completely.

Then, put the phone into download (Odin) mode. To do so, press and hold these keys together: Volume Down + Home + Power until the phone turns on and shows a Warning!! screen. Then press Volume Up to enter download mode.

Important! Connect your phone to PC now. You should get the message “Added !!” under Odin’s message box in the bottom left.

If you don’t get this message, make sure you installed drivers correctly (using Kies or directly) as given in step 1. If it still doesn’t work, try changing to another USB port on the computer and also use the USB ports on the back if you have a desktop PC.

Do not make any other changes in Odin except selecting the required files as given in step 11. Leave all other options as they are. Make sure Re-Partition check box is not selected.

Now, hit the START button to start flashing the firmware on the phone. When the flashing is complete, your phone will automatically reboot — and when you see the Samsung logo, you can safely unplug the cable. Plus, you’ll get a PASS (with green background) message in the left-most box at the very top of Odin.

What to do if Odin gets stuck: If ODIN gets stuck and doesn’t seem to be doing anything, or you get a FAIL message (with red background) in ODIN, do the following – disconnect the phone from the PC, close ODIN, remove battery, re-insert it, and do the procedure again from Step 8.

[Important] After you get the PASS message and the phone reboots, the phone might get stuck at the booting animation. If that happens, perform the following steps to make it boot. Remember that these steps will wipe your personal data like contacts, apps, messages, etc. If your phone has already booted, skip these steps; your phone has been restored/fixed successfully:

Boot into recovery mode: first power off the phone (by removing the battery and reinserting it), wait 5-6 seconds, then press and hold the Volume Up + Power keys together till the phone vibrates. Let go of the Power button (but keep holding the Volume button) till the phone boots into recovery. Once you are in recovery mode, use the volume keys to move the selection up and down and the power key to select an option.

Go to Wipe data/Factory Reset and select it. Select Yes on next screen.

Then, select reboot system now to reboot the phone, which will now boot properly.

If you run into any roadblocks while flashing the firmware, let us know and we’ll help you out.

Via: Sammobile


Which Gender Excels At Teamwork?

Benefits of a diverse team

Many types of diversity can increase a company’s effectiveness and productivity. Fostering diversity in areas such as race, nationality, ethnicity, sexual orientation, religion and age can bring several benefits to your company. Below are several ways a diverse team can improve your business.

1. It increases innovation.

Diverse teams bring a variety of experiences and perspectives to the table. A diverse team is able to approach tasks from multiple perspectives instead of from one way of thinking. As a result, these teams can more easily generate innovative and unique ideas.

2. It enhances decision-making.

With their varied backgrounds, members of diverse teams can approach datasets from their individual perspectives. Their takeaways are likely to result in different interpretations that can lead the team to consider new or alternative options. A team of similar members might not have considered these potentially game-changing routes.

3. It improves engagement among employees.

A diverse workplace is more likely to accept and consider the opinions of minority voices, helping these team members feel more comfortable in their workplaces. When they feel heard and respected, their confidence, motivation and engagement at work will often increase.

4. It attracts job seekers.

A Glassdoor survey found that about 80% of Black, Hispanic and LGBTQ+ job applicants say they highly consider a workplace’s diversity when evaluating job opportunities. The survey also found that 32% of job seekers would not consider working at a company that lacks diversity among its employees. A diverse workplace can improve your team’s performance while also setting you apart among job seekers’ many options.

5. It increases employee retention.

A 2023 Deloitte survey correlated Gen Z and millennial employee retention rates with employer efforts to create a diverse and inclusive working environment. Employees who feel happy with their employer’s diversity initiatives are more likely to stay in their jobs for at least five years.

Did you know? A diverse team of employees can bring several benefits to your company, including better employee engagement and retention, as well as enhanced decision-making.

Creating a DE&I program

Glassdoor’s survey showed that 63% of employees wish their employers made more efforts to increase their companies’ diversity. Implementing a diversity, equity and inclusion (DE&I) program can open the door to a broader pool of future employees. It can also show your current employees that you value and respect their need for diversity. Below are some tips to help you build a DE&I program for your company.

1. Set a vision.

It’s one thing to say you want a DE&I program at your company, but it’s another thing to know why. Your reason should go beyond wanting to reap the productivity and financial benefits of a diverse team. An authentic DE&I program starts with a true desire to elevate the voices of all types of employees in your organization.

Ask yourself: How does your team’s everyday work reflect your team’s structure? How does your company’s mission support diversity among your employees? Your vision and mission statements should clearly show what a diverse team represents within your organization.

2. Gather your leadership team.

Once you’ve identified a goal for your DE&I program, you’ll need to create a team that can oversee the program. This team should itself be diverse to reflect your goals and push your program in an effective direction. Consider hiring a manager or team member with a specialty in DE&I. After the leaders are chosen, the team can decide the specific roles each person will hold in developing and carrying out the program.

3. Implement training practices.

With training practices, you can help your employees foster an environment that’s inclusive for all team members. These programs can include sessions on the importance of diversity, equity and inclusion in the workplace. They can also focus on the roles that people play in fostering an inclusive work culture and communication skills that promote healthy conversations. Other session ideas include how to recognize and address unconscious bias, dispelling stereotypes and respecting alternate worldviews.

4. Track your progress.

Once your DE&I program has been in place for a few months, assess your company’s progress. This way, you can see how well the program is moving your company toward a true culture of diversity, equity and inclusion.

Conducting employee surveys is an excellent way to evaluate your team’s perceptions on your work environment. Ask your team members to share their honest opinions. Based on their responses, you can see where you might need to make changes to your DE&I program and improve your company’s culture.

Tip

Ask your employees to share their opinions in employee surveys to gauge the progress your company has made toward an inclusive work culture. This approach can also help you identify new changes to move your company forward.

Creating a healthy work environment for all employees

Employees can have all the productivity, time management and collaboration skills they need, and it can still not be enough. Adding diversity to the mix can be beneficial for both your team members and your company’s outputs. 

A diverse work environment can help employees of all backgrounds and identities feel respected and comfortable working at your company. With an active and thorough DE&I plan, you can start fostering a culture that keeps current employees around – and invites others to join.

Transformers For Image Recognition At Scale

This article was published as a part of the Data Science Blogathon

Introduction

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and that a pure transformer can perform very well on image classification tasks when applied directly to sequences of image patches.

How many words is an image worth?

A picture is worth a thousand words? It is not possible to fully describe a picture using words, but the paper tells us that an image is worth 16×16 words. In this blog, I am going to explain image recognition using transformers. It’s a really interesting paper published by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby on 22 Oct 2020. In this model, the authors borrowed the most dominant attention-based architecture in Natural Language Processing from the paper “Attention Is All You Need” by Ashish Vaswani et al. They did not modify the attention layers of the transformer architecture. The most important trick is to break an image into small patches (16×16, as in the title). But how are these patches divided?

What’s special about this paper?

It is special because we won’t use any convolutional network layers here. It works with a standard transformer encoder to perform image classification. Transformers make few assumptions about their input, whereas CNNs build in a lot of assumptions (inductive biases) about images. In fact, transformers were originally designed for NLP. I would recommend reading the article by Jay Alammar.

Using a transformer for image processing is more challenging: in NLP we pass a sequence of tokens as input, but here we pass image patches. Fitting an image into a transformer is not straightforward, so in the paper the image is divided into small patches that are then passed through the transformer.

It is a simple, scalable architecture that achieves state-of-the-art results, especially when trained on large datasets such as JFT-300M. It is also relatively cheap to pre-train the model. Transformers have largely replaced LSTMs in NLP.

Self-attention to images

How do we apply self-attention to images? In NLP, each word attends to every other word to find the relations between words. Applying this idea directly to an image means every pixel would attend to every other pixel. For an image of 4096 x 2160 pixels (DCI 4K), the computational cost is far too high, because the cost of the attention layer grows quadratically with the number of pixels.

For example, going from a 100×1000-pixel image to one with 10 times as many pixels makes the self-attention layer roughly 100 times more expensive, so full-resolution pixel-level attention quickly becomes infeasible.
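To make the quadratic cost concrete, here is a tiny illustrative calculation (not from the paper) that counts the pairwise attention scores needed when every pixel is treated as a token:

```python
# Rough illustration: self-attention cost grows with the square of the
# sequence length, so one token per pixel quickly becomes infeasible.
def attention_pairs(height, width):
    n = height * width          # one token per pixel
    return n * n                # pairwise attention scores to compute

print(attention_pairs(100, 1000))    # 1e10 pairs
print(attention_pairs(1000, 1000))   # 1e12 pairs, i.e. 100x more expensive
print(attention_pairs(4096, 2160))   # ~7.8e13 pairs for a DCI 4K frame
```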

How Vision Transformers work

First, split the image into patches. Image patches are treated like words in NLP. A patch embedding layer turns each patch into a vector, and this sequence of vectors is the input to the transformer blocks. In other words, the picture becomes a list of vectors, because to the transformer a picture is worth 16×16 words.

Vision Transformers (ViT)

As discussed earlier, an image is divided into small patches, let’s say 9 patches, and each patch might contain 16×16 pixels. The input sequence consists of flattened vectors (2D to 1D) of pixel values from each 16×16 patch. Each flattened patch is fed into a linear projection layer that produces what the authors call the “patch embedding”.
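As a rough sketch of this step (not the authors’ code; the patch size, embedding dimension, and tensor shapes below are illustrative assumptions), splitting an image into 16×16 patches and projecting each flattened patch with a linear layer might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Minimal sketch: split an image into 16x16 patches, flatten each patch,
# and project it to a D-dimensional patch embedding.
P, D = 16, 768                              # patch size and embedding dimension
img = torch.randn(1, 3, 224, 224)           # (batch, channels, height, width)

# unfold extracts non-overlapping PxP patches
patches = img.unfold(2, P, P).unfold(3, P, P)           # (1, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * P * P)  # (1, 196, 768)

to_embedding = nn.Linear(3 * P * P, D)       # the trainable linear projection E
patch_embeddings = to_embedding(patches)     # (1, 196, 768)
print(patch_embeddings.shape)
```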

Position embeddings are then added to the sequence of patch embeddings so that the patches retain their positional information. They inject information about the relative or absolute position of the image patches in the sequence.

An extra learnable [class] embedding is prepended to the sequence of patch embeddings. This class embedding is used to predict the class of the input image after being updated by self-attention.

The classification is performed by just stacking an MLP Head on top of the Transformer, at the position of the extra learnable embedding that we added to the sequence.
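A minimal sketch of that classification head (illustrative sizes, not the paper’s exact configuration): the MLP head reads only the final hidden state of the [class] token and maps it to class logits.

```python
import torch
import torch.nn as nn

# The MLP head reads the final hidden state of the [class] token (position 0)
# and maps it to class logits.
D, num_classes = 768, 1000
encoder_output = torch.randn(1, 197, D)      # [class] token + 196 patch tokens
mlp_head = nn.Sequential(nn.LayerNorm(D), nn.Linear(D, num_classes))
logits = mlp_head(encoder_output[:, 0])      # use only the class-token representation
print(logits.shape)                          # torch.Size([1, 1000])
```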

Patch embedding

The most important part of this paper is how to break down the image into patches. An image is represented as

X ∈ R^(H×W×C)   (a 3D image with height H, width W, and C channels)

reshape the 3D image into a sequence of flattened 2D patches

Xp ∈ R^(N×(P²·C))

where the sequence length N = H·W / P² and (P, P) is the resolution of each image patch.
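A quick worked example of these shapes, assuming a 224×224 RGB image and 16×16 patches (typical ViT settings, used here only for illustration):

```python
# Worked example of N = H*W / P^2 for a 224x224 RGB image with 16x16 patches
H, W, C, P = 224, 224, 3, 16
N = (H * W) // (P * P)        # number of patches (sequence length)
patch_dim = P * P * C         # length of each flattened patch vector
print(N, patch_dim)           # 196 patches, each a 768-dimensional vector
```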

Each flattened patch is mapped to a D-dimensional vector by a trainable linear projection.

[class] token

Similar to BERT’s [class] token, we prepend a learnable embedding to the sequence of embedded patches (z_0^0 = x_class).

z_0 = [x_class; x_p^1·E; x_p^2·E; … ; x_p^N·E] + E_pos,    E ∈ R^((P²·C)×D), E_pos ∈ R^((N+1)×D)

x_class is the learnable class token and x_p^1 … x_p^N are the N flattened image patches.

To pre-train with the transformer encoder, we always need the classification token at the 0th position. When we pass the patch images as inputs, we always prepend this classification token as the first element of the sequence, as shown in the figure.
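Here is a hedged sketch of this step (shapes and names are assumptions, not the authors’ code): a learnable class token is prepended to the patch embeddings and learnable position embeddings E_pos are added, producing the encoder input z_0.

```python
import torch
import torch.nn as nn

# Prepend a learnable [class] token to the patch embeddings and add
# learnable position embeddings (one row per token, including the class token).
N, D = 196, 768
patch_embeddings = torch.randn(1, N, D)                  # output of the linear projection E

cls_token = nn.Parameter(torch.zeros(1, 1, D))           # x_class, learned during training
pos_embedding = nn.Parameter(torch.zeros(1, N + 1, D))   # E_pos

tokens = torch.cat([cls_token.expand(1, -1, -1), patch_embeddings], dim=1)  # (1, N+1, D)
z0 = tokens + pos_embedding                              # encoder input z_0
print(z0.shape)                                          # torch.Size([1, 197, 768])
```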

Positional encodings / Embeddings

Since Transformers need to learn the inductive biases for the task they are being trained for, it is always beneficial to help that learning process by all means. Any inductive bias that we can include in the inputs of the model will facilitate its learning and improve the results.

Position embeddings are added to the patch embeddings to retain positional information. In Computer Vision, these embeddings can represent either the position of a feature in a 1-dimensional flattened sequence or they can represent a 2-dimensional position of a feature.

1-dimensional:  a sequence of patches, works better

2-dimensional: X-embedding and Y-embedding

Relative: Define the relative distance of all possible pairs.

(Figure: the position embedding formula from the original attention mechanism)

Model architecture

If we do not provide the transformer with positional information, it will have no idea of the order of the patches (which comes first and which follow). This sequence of patch vectors is then fed into the transformer encoder.

The transformer encoder consists of alternating layers of multi-headed self-attention (MSA) and MLP blocks, with layer normalization (LN) applied before every block and residual connections after every block. The MLP contains two layers with a GELU non-linearity. Finally, an extra learnable classification module (the MLP head) is added on top of the transformer encoder, producing the network’s output classes.

GELU stands for Gaussian Error Linear Unit.

z′_ℓ = MSA(LN(z_(ℓ−1))) + z_(ℓ−1),        ℓ = 1 … L

z_ℓ = MLP(LN(z′_ℓ)) + z′_ℓ,               ℓ = 1 … L
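A minimal PyTorch sketch of one such pre-norm encoder block, following the two equations above (the hyperparameters are illustrative, not the paper’s exact configs):

```python
import torch
import torch.nn as nn

# One pre-norm encoder block: layer norm before MSA/MLP, residual after each.
class EncoderBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(               # two layers with a GELU non-linearity
            nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim)
        )

    def forward(self, z):
        h = self.norm1(z)
        z = self.msa(h, h, h, need_weights=False)[0] + z   # z'_l = MSA(LN(z_{l-1})) + z_{l-1}
        z = self.mlp(self.norm2(z)) + z                    # z_l  = MLP(LN(z'_l)) + z'_l
        return z

block = EncoderBlock()
print(block(torch.randn(1, 197, 768)).shape)    # torch.Size([1, 197, 768])
```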

Hybrid architecture

In the hybrid architecture, the input sequence can be formed from the feature maps of a CNN instead of raw image patches, with the patch embedding projection applied to patches of those feature maps. The classification input embedding and position embeddings are then added as described above.

z_0 = [x_class; x_p^1·E; x_p^2·E; … ; x_p^N·E] + E_pos,    E ∈ R^((P²·C)×D), E_pos ∈ R^((N+1)×D)

Fine-tuning and Higher resolution

Supervised learning is used for pre-training on large datasets (e.g., ImageNet). For fine-tuning, the pre-trained prediction head is removed and a zero-initialized D × K feedforward layer is attached, where K is the number of downstream classes (e.g., 10 classes for CIFAR-10).

It is often beneficial to fine-tune at a higher resolution than was used for pre-training. When feeding images of higher resolution, the patch size is kept the same, which results in a larger effective sequence length.

Vision Transformers can handle arbitrary sequence lengths (up to memory constraints); however, when the sequence length changes, the pre-trained position embeddings may no longer be meaningful.

2D interpolation of the pre-trained position embeddings is performed, according to their location in the original image. Note that this resolution adjustment and patch extraction are the only points at which an inductive bias about the 2D structure of the images is manually injected into the Vision Transformers.
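A possible sketch of that 2D interpolation (the tensor layout and grid sizes are assumptions for illustration): the patch position embeddings are reshaped to their 2D grid, bilinearly interpolated to the new grid size, and flattened back, while the [class] token embedding is kept unchanged.

```python
import torch
import torch.nn.functional as F

# 2D-interpolate pre-trained position embeddings when fine-tuning at a higher
# resolution, so a longer patch sequence still gets spatially meaningful positions.
def resize_pos_embedding(pos_emb, old_grid=14, new_grid=24):
    cls_pos, patch_pos = pos_emb[:, :1], pos_emb[:, 1:]          # (1,1,D), (1,old*old,D)
    d = patch_pos.shape[-1]
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, d).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bilinear", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, d)
    return torch.cat([cls_pos, patch_pos], dim=1)                # (1, 1 + new*new, D)

print(resize_pos_embedding(torch.randn(1, 197, 768)).shape)      # torch.Size([1, 577, 768])
```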

Datasets

Dataset        Classes    Images
ImageNet       1,000      1.3 million
ImageNet-21k   21,000     14 million
JFT            18,000     303 million

The authors of the paper have trained the Vision Transformer on a private Google JFT-300M dataset containing 300 million (!) images, which resulted in state-of-the-art accuracy on a number of benchmarks ( Image below).

Model variants

Model       Layers   Hidden size D   MLP size   Heads   Params
ViT-Base    12       768             3072       12      86M
ViT-Large   24       1024            4096       16      307M
ViT-Huge    32       1280            5120       16      632M

Details of Vision Transformer model variants

The “Base” and “Large” models are directly adopted from BERT, and the larger “Huge” model is added. For instance, ViT-L/16 means the “Large” variant with a 16×16 input patch size. The transformer’s sequence length is inversely proportional to the square of the patch size, so models with a smaller patch size are computationally more expensive.

Comparison to state-of-the-art

The largest models – ViT-H/14 and ViT-L/16 – are compared to state-of-the-art CNNs from the literature: Big Transfer (BiT), which performs supervised transfer learning with large ResNets, and Noisy Student, a large EfficientNet trained using semi-supervised learning on ImageNet and JFT-300M with the labels removed. At the time of the paper, Noisy Student was the state of the art on ImageNet and BiT-L on the other datasets reported there. All models were trained on TPUv3 hardware, and ViT took substantially fewer TPUv3-core-days (around 2,500) to pre-train than the CNN baselines.

Model size vs data size

The ImageNet, ImageNet-21k, and JFT-300M datasets are small, medium, and huge respectively. On the small dataset, ResNet (BiT) performs well, but as the dataset scales up, ViT pulls ahead; the Vision Transformer performs very well on the JFT-300M dataset. Standard training techniques such as learning-rate decay, dropout, and SGD with momentum are used when training on these large datasets.

ResNets perform better with smaller pre-training datasets but plateau sooner than ViT, which performs better with larger pre-training. ViT-b is ViT-B with all hidden dimensions halved.

Scaling Data Study

The figure above shows transfer performance versus total pre-training computational cost. A few patterns can be observed.

First, Vision Transformers dominate ResNets on the performance/compute trade-off, using roughly 2–4× less compute to attain the same performance (average over 5 datasets).

Second, hybrids slightly outperform ViT at small computational budgets, but the difference vanishes for larger models. This result is somewhat surprising since one might expect convolutional local feature processing to assist ViT at any size.

Third, Vision Transformers appear not to saturate within the range tried, motivating future scaling efforts.

Attention pattern analysis and self-supervised pre-training

With self-supervised pre-training (masked patch prediction), the model achieves a significant improvement of 2% over training from scratch, but is still 4% behind supervised pre-training. We leave the exploration of contrastive pre-training to future work.

Summary / Conclusion

Transformers solve a problem that was not only limited to NLP, but also to Computer Vision tasks.

Huge models (ViT-H) generally do better than large models (ViT-L) and win against state-of-the-art methods.

Vision transformers work better on large-scale data.

Attention Rollouts are used to compute the attention maps.

Like the GPT-3 and BERT models, the Vision Transformer model also scales well.

Large-scale training outperforms inductive bias.

Convolutions are translation invariant and locality-sensitive, but lack a global understanding of images.

So does this mean that CNNs are extinct? No! CNNs are still very effective for tasks like object detection and image classification. Since ViT needs large datasets to shine, ResNet and EfficientNet models, which are state-of-the-art convolutional architectures, remain good choices for small, medium, and large datasets. However, transformers have been a breakthrough in natural language processing tasks such as language translation, and they show real promise in the field of computer vision.

Please do share if you like my post.

Reference

Images are taken from Google Images and published papers.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Microsoft Office Online Aims At Google

NEW YORK/SAN FRANCISCO (Reuters) – Microsoft Corp launches an updated version of its Office software on Wednesday, aiming to keep its grip on the hugely profitable business application market while countering the challenge of free online alternatives from Google Inc.

The world’s largest software company is upgrading its popular Word, Excel, Outlook and PowerPoint applications, and rolling out its own online versions to keep up with the new class of mobile, web-connected users that have emerged since the last upgrade in 2006.

Most are expecting the Office franchise — which Microsoft says has 500 million users — to retain its dominance in the business world. But Office could be facing the beginning of an erosion of its “must-have” status.

“Every time Microsoft releases a new version of Office, they get a bump up in revenue,” said Toan Tran, an analyst at Morningstar. “But how big of an upgrade is this? They might have a harder time getting people to update.”

Microsoft is expected to trumpet a list of improvements on Wednesday, such as editing photos in Word, using video in PowerPoint, collaborating on documents and managing e-mail conversations in new ways.

But the most interesting facet is Microsoft’s move into the “cloud” — allowing users to manipulate documents stored on remote servers from anywhere — where Google has been setting the pace.

“They’re coming into our playing field,” said Dave Girouard, the Google executive leading the company’s charge into business applications. “They (Microsoft) have conceded that this is the future and now we think our products and services will get a lot more consideration.”

Google Docs — stripped down versions of Microsoft’s core programs — are available over the Internet with no need to download software. They are free for personal users and $50-per-user per year for companies. Google says it has picked up 25 million users since launching almost four years ago.

That is only a fraction of Microsoft’s 500 million, but is growing quickly.

“Word and Excel are pretty secure — Excel is embedded in an uncountable number of business processes, so that would be pretty hard to rip out,” said Tran.

But Microsoft faces more pressure in email and calendar programs, the areas most amenable to online and mobile use, where Google already has a strong foothold.

That could put the brakes on one of Microsoft’s most formidable profit engines.

Microsoft’s business division — which gets 90 percent of its sales from Office — averages around $2.8 billion profit per quarter. That is 47 percent of Microsoft’s total profit so far this fiscal year, second only to Microsoft’s core Windows franchise.

Businesses account for the vast majority of that. According to the latest data from tech research firm Forrester, 81 percent of companies are running Office 2007, compared with only 4 percent using Google’s online equivalent.

A Forrester poll indicates almost a third of existing Office users plan to upgrade to Office 2010 — which will be available to them from Wednesday — within 12 months.

Ordinary consumers get the chance to buy Office 2010 next month, at prices ranging from $119 to $499, depending on the level of sophistication.

It will take at least a year before it is clear whether Microsoft has a winner and the extent to which customers are migrating to online versions and lessening reliance on installed software.

“Google is not the threat that it will be once the ‘virtual desktop’ becomes a no-brainer,” said Richard Williams, an analyst at Cross Research, referring to the practice of accessing software over the Web. “That’s the time Microsoft really has to worry about.”

Scathing Night At The Colored Museum

Scathing Night at The Colored Museum Huntington stages Wolfe’s comic take on African American life

The cast of the Huntington Theatre Company production of The Colored Museum: Nathan Lee Graham (from left), Rema Webb, director Billy Porter, Shayna Small, Ken Robinson, and Capathia Jenkins. Photo by Nile Hawver/Nile Scott Shots

When it premiered off Broadway in 1986, The Colored Museum knocked the socks off audiences and critics with its 11 “exhibits”—sketches about being black in America that go for the jugular as well as the funny bone—and spare no one.

The Huntington Theatre Company’s production of the play, written by two-time Tony Award winner George C. Wolfe, comes at a time when the headlines, from Trayvon Martin to Ferguson, Mo., are holding a mirror up to America’s persistent racial divide. From “Cookin’ with Aunt Ethel” (“bake yourself a batch of Negroes”) to “Git on Board” (a Savannah-bound “celebrity slave ship”) to “The Last Mama-on-the-Couch Play” (which lampoons the Lorraine Hansberry classic A Raisin in the Sun), the skits skewer racial stereotypes and draw comic fodder from their contemporary African American characters’ identities and values. (The play is commonly viewed as the inspiration for the Fox comedy sketch series In Living Color, which originally ran from 1990 to 1994.) The Colored Museum runs through April 5 at the Boston University Theatre.

With a cast of three women and two men playing multiple roles, the exhibits are “startling and hilarious,” says Huntington artistic director Peter DuBois, who describes the pairing of Wolfe and director Billy Porter as a dream.

Porter first read the play as a teenager, and he credits the scathing comedy with igniting in him “the fire of possibility” that set him on his creative journey. That journey “included stretching myself beyond what, up until then, I thought was possible for a little black gay boy from the ghetto,” he says. Porter comes to the production fresh from his Tony Award–winning performance in the Broadway hit musical Kinky Boots. His 2013 Tony for Best Actor in a Musical was one of a string of honors that included a Grammy, a Drama Desk, and an Outer Critics Circle Award. In addition to his long list of directing and acting credits, Porter recently released an album, Billy’s Back on Broadway.

BU Today asked Porter about the continued power and relevance of The Colored Museum and why he loves the play.

BU Today: In what ways do you expect The Colored Museum to resonate differently with today’s audience than with 1980s audiences? In what ways will it hit the same nerves?

Porter: The more things change, the more they stay the same. How it resonates is not really the point. The fact that it still does resonate now more than ever is what we are examining with the piece.

In the wake of Ferguson and other recent incidences of racism and racial discrimination, do you think the play will pack an added punch?

With all of the racial conflict that still exists in this country…it’s a sucker-punch to the gut.

Complicated topics are always easier to receive when couched in humor. It actually doesn’t feel like one is being forced at all. That’s the gift that humor gives us.

The exhibits are as poignant as they are satirical—which ones are likely to rattle the audience most?

I would have to say, “Git On Board,” the skit about the slave ship, and “The Gospel According to Miss Roj,” about a transgendered black man.

In what ways is the play a different experience for people of color and for whites, for young people with short memories and for older people with long ones?

I can’t really speak to what experiences anybody will have. As an artist I find the point is the fact that people are present. How they experience the work is not my business. My hope is that everyone leaves hopeful.

What are the challenges, and the rewards, of directing a play comprising a series of sketches, as opposed to a single plotline?

Saturday Night Live and shows of that nature have created an idea that sketches are somehow different from more traditional narratives. And while there is some truth in that, the reality is that every sketch stands alone as its own narrative, with a beginning, middle, and end. The reward is that you get 11 fully realized one-act plays for the price of one.

What does The Colored Museum have to say to a new generation?

Yes, things have changed. Yes, we have a black POTUS. But don’t get it twisted—we must not be complacent. We must not tire. Our greatest work is still ahead of us.

Do you think it matters whether audience members recognize the play’s use of parody (especially “The Last Mama-on-the-Couch Play”)?

No, because the piece is written so brilliantly it stands alone. It need not have any reference point for comprehension.

When did you first know of this play and what was its influence on you?

My high school drama teacher gave me the play back in 1986 when I was searching for monologues for college auditions. My life has never been the same.

How would you like people to feel when they leave the play?

Motivated. Empowered. Hopeful.

The Huntington Theatre Company’s production of The Colored Museum runs through April 5 at the BU Theatre, 264 Huntington Ave., Boston. Tickets may be purchased online, by phone at 617-266-0800, or in person at the BU Theatre box office. Patrons 35 and younger may purchase $25 tickets (ID required) for any production, and there is a $5 discount for seniors. Military personnel can purchase tickets for $15, and student rush tickets are also available for $15. Members of the BU community get $10 off (ID required). Call 617-266-0800 for more information. Follow the Huntington Theatre Company on Twitter at @huntington.

Explore Related Topics:

A Day At The Atom Smasher

A Day at the Atom Smasher BU grad student living and working at the CERN particle physics lab

Jeremy Love (GRS’10) is living and working at the CERN particle physics laboratory outside Geneva, Switzerland. Photo by Chris Berdik. (Below) The ATLAS control room celebrates the first complete pass of a proton.

On a chilly November morning, Jeremy Love (GRS’10) is standing in front of a large, wooden globe of a building outside Geneva, Switzerland, on the Meyrin campus of CERN (Conseil Européen pour la Recherche Nucléaire), an international physics research center also known as the European Organization for Nuclear Research. Love’s a skinny guy with a dark scruff of beard. He’s wearing jeans, a sweatshirt, and an olive backpack, and looking pretty casual for somebody who aims to help solve the mysteries of the universe by creating millions of mini–big bangs.

All that science will take place about 100 meters below ground in the Large Hadron Collider (LHC), enclosed in a 27-kilometer concrete tunnel that actually crosses the border between Switzerland and France. The LHC is the world’s most powerful atom smasher, a machine that physicists have been anticipating for decades. It was finally turned on September 10 — and then it promptly broke down, launching a laborious eight- to ten-month repair process.

Love arrived at CERN last June, and he’ll be here until the summer of 2010, part of a team of Boston University physicists attached to one of the LHC’s main particle detectors, a five-story bundle of trackers, calorimeters, magnets, and other instrumentation known as ATLAS. Inside the detector, protons will collide at nearly the speed of light, and physicists such as Love will sort through the debris looking for new particles that might help explain how the universe evolved.

The collisions in the LHC will reach energy levels seven times more powerful than any previous experiment has achieved. “And the higher energy you reach, the earlier in the universe you’re looking, because fractions of a second after the big bang, the universe’s energy was much more concentrated,” Love says. Heading off to a wooden building that sits atop ATLAS, he opens the door to reveal a cavernous room, crisscrossed with orange, green, and yellow girders, ventilation pipes, and a shoulder-high steel fence that rings two massive holes on either end of the concrete floor.

“You technically need a hard hat to be in here, but I think we’ll be all right,” says Love, ducking under some yellow caution tape and walking around to a short catwalk above one of the pits. Looking down into the guts of ATLAS (right), the top of the “muon system” — part of which was built at BU under the leadership of Steve Ahlen, a College of Arts and Sciences professor of physics — is visible. After other parts of the detector have tracked and trapped most of the charged particles that spray from a proton collision, this outermost system will measure the trajectories and energy of muons (like electrons, but heavier). And it’s this muon data that Love will eventually comb through for evidence of particles never before observed.

A siren sounds, and a yellow light flashes on a crane hovering over the opposite pit. They’re moving a piece of the detector, Love explains, part of the laborious repair work under way ever since a faulty electrical connection led to a major leak of liquid helium (used to keep the LHC colder than space) and forced the shutdown of the proton beam in late September. The experiment won’t start up again for at least six months, says Love, because everything must be fixed within the relatively tight confines of the LHC tunnel.

“It’s like a ship in a bottle,” he says. “To get to interior pieces, they have to move the outside pieces. So there’s this sort of intricate dance of how things are uncovered and repaired.”

Because the beam is shut down, Love spends a lot of his time down in the ATLAS experimental cavern 100 meters below, performing routine maintenance on the muon system, harnessed for safety as he tinkers dozens of feet off the ground. When the beam eventually comes back online and the proton collisions begin at a rate of thousands every second, Love will start analyzing the data, using specially designed software to sift through the collisions looking for the telltale signals of new particles. The LHC will produce enough data every year to roughly double all the information currently on the Internet.

The hope for that information is to help scientists discover what’s beyond the Standard Model of particle physics, which describes the simplest known particles (such as electrons and quarks) and the forces that act on them (such as electromagnetism and the force responsible for nuclear decay). For decades, this model has left particle physicists “unsatisfied,” as Love puts it — the model neglects gravity and offers no explanation for the imbalance of matter and antimatter, or for “dark matter,” a phenomenon indicating that most of the universe’s mass is invisible, because it doesn’t emit light. In addition, the model’s explanation of why some particles have mass and others, such as photons, don’t, predicts the existence of a particle (known as the Higgs, named after the theorist who proposed it) that has yet to be observed.

In the last few decades, several theories have been proposed to explain what the Standard Model doesn’t. Each of them predicts the existence of new particles that LHC scientists will be hunting for in the years ahead. First, however, they’ll need to spend a lot of time getting the proton beams to curve just right and calibrating every bit of the particle detectors.

“It’s difficult, when you’re working on an experiment this big, not to get lost in the details and forget that there is a big picture,” says Love, who first became interested in cosmic questions when he read theoretical physicist Stephen Hawking’s A Brief History of Time. But even with the beam temporarily shut down, he is thrilled about working at CERN.

“If you’re not motivated by understanding the universe, it’s probably not going to keep you interested,” he says. “What keeps everybody here motivated is the drive to understand what nobody else understands.”

Chris Berdik can be reached at [email protected].

Explore Related Topics:

How To Win At Customer Lifecycle Marketing

Being a marketer can be exhausting. As if it wasn’t hard enough to attract new customers, now everyone’s talking about optimizing the entire customer lifecycle. (We’re tired just thinking about it!)

If you’re trying to understand customer lifecycle marketing, grab a coffee and read on: we’re here to remedy your marketing overwhelm. In this blog post, we explain why you need to work across the entire customer journey, and how to do it well.

What is customer lifecycle marketing?

Customer lifecycle marketing is about making every touchpoint between a customer and a brand more profitable.

A customer’s lifecycle describes their journey through the buying cycle. It includes every interaction they have with your brand along the way, and covers every marketing channel those interactions happen on.

Creating a customer lifecycle marketing strategy helps you optimize each and every customer interaction. The aim is to build a stronger relationship that will increase revenue.

Stages of customer lifecycle marketing

So, what are the different stages of the customer lifecycle and which tactics are most relevant at each stage?

Attraction

Creating content based on things people are searching for (that relate to your offering) is a great way to get discovered.

If your content meets search intentions well, you will attract organic traffic to your website. Make sure you meet informational needs rather than pushing your product.

Here’s a strong example of attractive content from Airbnb. It meets the needs of people searching for “things to do in Berlin”:

Creating content that’s found through search gets you in front of customers at the beginning of their journey to purchase. When they do start comparing purchase options, you’ll be top of mind.

Consideration 

The consideration stage of the customer lifecycle is when customers consider purchase options. You want to get them to form a preference for your brand.

Content types that help include welcome campaigns, ratings and reviews, and product descriptions.

A welcome campaign is an automated series of emails, triggered when someone subscribes. It enables you to position your products in bite-sized chunks and entice your customer to purchase.

Here’s a welcome email example from Office that we love. It’s benefit-led, it encourages social engagement, and it introduces popular brands:

Reviews and ratings are another useful tool you can use at the consideration stage. They act as social proof and help customers assess whether a product is a safe choice.

Conversion

Now you need to persuade your customers to purchase from you. Product recommendations, cart abandonment campaigns, and calls-to-action are your tools to do this.

Here’s an example of a tempting cart abandonment email from Kate Spade that would have enticed us back to buy:

Retention

The retention stage of the customer lifecycle is your chance to build loyalty and retain customers.

Try using loyalty schemes, product recommendations, or VIP discounts at this stage.

Advocacy and win-back

The win-back stage is when you employ tactics to win back any lapsed customers. Re-engagement email campaigns are an effective way to win-back those sleeping subscribers.

Reminding people what you have to offer is a pragmatic way to approach this. Here’s a great example of this tactic in action from Boden:

Why does post-purchase marketing matter?

Conversions are not the end of the customer’s lifecycle. They are only the midpoint.

Post-purchase tactics help you shape post-purchase behaviour (the way a customer thinks, feels, and acts after they have bought something).

It’s common for customers to feel anxious after a purchase. They’ve just parted with their hard-earned money. Naturally, they question if they spent it well.

Post-purchase marketing is your tool to influence how your customer feels about their purchase. Done well, you can use post-purchase marketing to:

make customers feel good about their purchase

increase the likelihood your customer will buy from you again

How to win at post-purchase marketing

Email marketing is an ideal post-purchase marketing channel. With the right technology in place, you can set up automated emails that are triggered when a customer buys from you.

Here are some essential post-purchase email marketing ideas to get you started:

Thank you

Thanking your customers for their purchase builds rapport and makes them feel good about what they’ve just bought.

Here’s a bold and simple post-purchase thank you example from Abercrombie & Fitch:

Refund policy and returns

Send an email to remind customers of your refund policy and clearly explain your returns policy. This helps to reduce post-purchase anxiety.

How-to guides

It’s frustrating when you can’t work out how to use something you just bought.

Email a how-to guide to help your customers use their purchase. This will improve their experience and reduce the risk that they’ll regret buying from you.

Product care tips

Some customers worry they’ll have to replace their purchase so soon it won’t be worth the money.

Send a care guide so your customers know how to look after their product. This reassures them it will last for a while and offer good value. 

Product recommendations

Email personalized recommendations for complementary products. This helps your customers get the most out of their purchase.

Here’s an example of how to do post-purchase product recommendations well from Best Buy:

Loyalty programme

Show your customers you value their custom by inviting them to become part of your loyalty programme. This is a great way to encourage frequent purchasing.

Product satisfaction feedback

Asking for feedback helps you improve your customer experience and shows customers you care.

Make sure you time feedback requests well to get the best response rate. This will depend on when the customer will have had a chance to use the product. 

Social media and user-generated content

Get customers to post lifestyle snaps of their new purchase to inspire others to buy. This helps you build a bank of user-generated visual content to use for future campaigns. 

Refer a friend

Offer your customers the chance to refer a friend in return for a discount. This helps you increase your customer base and boost your revenue.

Replenishment

Prompt customers to re-order before their product runs out with an automated replenishment email.

This ensures they aren’t left in the lurch (and that they don’t buy from a competitor).

Learn more by reading Pure360’s best practice guide to post-purchase marketing.

Takeaway

We hope this post has helped you learn about customer lifecycle marketing and given you some ideas to play with.

To implement the strategies we’ve explored, you’ll need the right marketing automation technology in place. That’s where we come in.
