Alacer Group’s Velocity AML Suite Selected for Deployment in Prominent Bangladesh Banks

SEATTLE, WA–(Marketwired – Aug 13, 2015) – The Alacer Group, dedicated to helping business leaders accelerate performance, quality, productivity and profitability, today announced that its Velocity AML Suite has been chosen by three prominent banks in Bangladesh to improve AML case workflow and security. After an extensive search process, Mercantile Bank Limited, Social Islami Bank Limited (SIBL) and Meghna Bank selected Alacer’s Velocity AML Suite to ensure robust KYC/CIP onboarding, sanctions screening, payments interdiction and transaction profiling, all key operational areas that support local and international anti-money laundering efforts. The banks chose Velocity for its rapid deployment, its robust reporting features and its delivery of consistent, full compliance with AML regulatory requirements.

All implementations are currently underway through Alacer’s local technology partner, DataSoft Bangladesh Limited.

About Velocity
Alacer’s Velocity AML Suite is a powerful and scalable set of system modules specifically designed to facilitate AML case workflow and reporting. Velocity combines smart case management, sanctions screening, transaction monitoring and KYC/CDD functionalities. Each component in the suite is scaled to meet local and global regulatory requirements accurately, consistently and predictably. Velocity offers seamless integration with core banking systems and can be used on its own to perform KYC/CDD, alerts reviews, transaction monitoring, sanctions screening, real-time payments interdiction and periodic reviews. The system is purposefully designed to meet international regulatory compliance standards, and its powerful reporting capabilities give management insight into effectiveness, productivity and resource utilization.

About DataSoft Bangladesh Limited
DataSoft is a leading software product and services company in Bangladesh, certified to ISO 9001:2008, with a CMMI Level 5 appraisal in progress. Since 1998, DataSoft has established a successful track record of delivering innovative and cost-effective technical services to customers in the corporate and public sectors. Fortune 500 companies have selected DataSoft for mission-critical public services, IT services such as e-Payment, Customs House Automation and Port IT operation (CTMS), automation of commercial banks, and the deployment of a SaaS model on a private cloud to more than 2,200 microfinance banks. For more information, please visit the website at www.datasoft-bd.com.

About The Alacer Group
Headquartered in Bellevue, WA, with offices in New York and Dallas, The Alacer Group is a business consulting firm focused on four practice areas: big data, technology, finance and healthcare. With expert consultants around the globe, Alacer works with companies to address their needs quickly and intelligently, delivering positive outcomes. Alacer’s experienced anti-money laundering specialists design, develop and implement effective process and system solutions for financial institutions. We help clients strengthen AML and compliance controls while improving efficiency through the right mix of people, process and technology. For more information, please visit the website at www.alacergroup.com or find us on LinkedIn.

DFSS Drives Results for Financial Services Firms

Lean Six Sigma remains a popular and effective tool for improving efficiency in financial services operations. However, many financial institutions are finding that identifying and reducing incremental defect variability does not capture the full spectrum of improvement opportunities. To achieve a larger return on investment, many organizations are turning to Design for Six Sigma (DFSS) to re-examine, radically recreate and often build entirely new processes.

The Potential for Bigger Returns

Richard Paxton, co-founder and CEO of Seattle-area consulting firm The Alacer Group, said that the interest in DFSS as a preferred approach to process design and reengineering has grown significantly over the past few years. “In financial services, a typical DMAIC project can deliver somewhere between $500,000 and $600,000, whereas DFSS projects can be upward of $20 million,” he said. “The key is knowing where and how to apply the technique.”

Paxton has been in the financial services industry for more than 10 years. Previously, he was senior vice president of Operational Excellence at JPMorgan Chase, and senior vice president, Global Quality & Productivity Process Excellence at Bank of America.

“Ten years ago, many financial institutions were wary of using DFSS tools and thought of them primarily for application in manufacturing industries,” Paxton said. Additional apprehension centered around a misperception that massive technology deployments had to be part of DFSS solutions. Today, he added, DFSS is being embraced for its ability to provide robust designs for reengineering existing processes and products and developing new ones.

“Financial services today are very similar to large technology organizations, relying upon a complex network of interrelated processes and systems to deliver quality products and services for their customers,” Paxton said. “By taking an end-to-end view, and designing across functional boundaries, new levels of performance and quality can be realized. This is where DFSS is a perfect fit, and can help bridge the gap between business and technology.”

Like Lean and DMAIC, DFSS takes time, and it moves with its own feel and pace. Depending upon the scope and level of integration, a large-scale design and build can take 12 months to complete, Paxton said. His firm recently finished a project in which it completely redesigned a core business process for a large financial institution. The project included the implementation of a new workflow environment, support processes, user interfaces and system integrations. From start to finish, the project took 18 months to complete. The payoff, though, was substantial: a flawless launch, increased capacity in the front office and significant back-office efficiencies, Paxton said.

In another recent DFSS project, one of The Alacer Group’s large national banking clients found that potential fraudulent activity and incoming suspicious activity reports (SARs) had increased 10-fold. Quality issues and high variability costs associated with these SARs had resulted in $80 million in annual vendor and contracting expenses.

Working with a local processing specialist, and with compliance and legal staff, a process team used the DFSS approach for process design to streamline SAR processing and optimize staffing models. The design was also built to accommodate future SAR types and regulatory changes.

As a result of the DFSS improvements, the client experienced $6 million per month in expense savings and was formally recognized by regulators for having a “best in class” SAR process. An additional $8 million in annual savings through a vendor was also identified, Paxton said.

Identifying Root Cause, Redesigning the Process

Another tactic for tackling financial services issues is to start with the Define, Measure and Analyze phases of DMAIC up front to identify root causes, and then apply DFSS to redesign the process to better meet the needs of the customer. “By understanding how and where defects in a process are generated, you can design out the opportunity for defects to occur and design in controls,” Paxton said.

For instance, Alacer recently analyzed the mortgage process for secured lending products at a Fortune 10 organization. The process was highly inefficient, cumbersome and lacked standardization, resulting in rework, increased risk exposure and loss of business. The project team began by analyzing the end-to-end process through the application of DMAIC and Lean to identify defects and waste. After the income verification process and credit guidelines were identified as the two most important improvements via DMAIC and Lean analysis, the DMADV (Define, Measure, Analyze, Design, Verify) roadmap of DFSS was then selected to design a new automated income verification process.

By adopting new support processes and modifying credit guidelines, the client’s end-to-end mortgage processing time was reduced by 30 percent and overall risk exposure was lowered, freeing up $500 million in capital for funding new loans.

“Over the years, the functions of banking have remained the same – sales, fulfillment, and service,” Paxton added. “What has changed is the way they are delivered. What will differentiate financial institutions in the future is their ability to deliver the products and services customers want, the way they want them, and at a level of efficiency that makes sense for the business. Using DFSS, in conventional and unconventional ways, can help organizations deliver value for their customers and their bottom line.”

Big data blocks gaming fraud

The explosion of online games has resulted in the creation of a new industry: in-game currency, currently valued at $1 billion in the U.S. alone. But game developers, particularly startups that are rising and falling on a single game, are losing significant revenue as savvy social game players figure out how to “game” the system by stealing currency rather than buying it.

Normally, network security flaws enabling fraudulent gameplay are identified and solved on the backend, often by using thousands of servers at high capital and operational expense. This represents a significant cost to the developer, as IT security personnel will play back every transaction and analyze it in order to determine who the cheaters are and how they are manipulating the game.

There is a better way to ensure the viability of a new company with a product requiring in-game currency. Online game developers have vast amounts of rich, unstructured data at their fingertips that they could use to help achieve revenue stability. The data delivers an understanding of each player and what they are doing while in the game. That same data could be manipulated to identify and stop fraudulent game activities in real time.

Recently, we had the opportunity to test this premise with one of the world’s largest casual online gaming companies, which was plagued by revenue leakage due to cheating players. By using analytics to examine and model the game, we determined common player navigational paths. We then designed mathematical algorithms that placed parameters around the average player and predicted how he or she would typically progress through the game. This information determined the threshold range for acceptable play; any player who fell outside that range would be flagged as potentially fraudulent. The game developer could then choose to immediately freeze the accounts of those advancing too quickly — thereby plugging the revenue hole.
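
To make the approach concrete, here is a minimal sketch of that kind of threshold check, assuming a per-player progression metric such as levels gained per hour. The column names, the pandas-based implementation and the three-sigma band are illustrative assumptions rather than the model we actually deployed, which was built on full navigational paths.

```python
# Minimal sketch: derive a threshold band from a baseline of typical players,
# then flag anyone whose progression rate falls above it. Column names,
# the three-sigma band and the sample data are illustrative assumptions.
import pandas as pd

def flag_suspicious(baseline: pd.DataFrame, candidates: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """Both frames carry a 'levels_per_hour' progression metric per player."""
    mean = baseline["levels_per_hour"].mean()
    std = baseline["levels_per_hour"].std()
    upper = mean + k * std                        # ceiling of acceptable play
    out = candidates.copy()
    out["flagged"] = out["levels_per_hour"] > upper
    return out

baseline = pd.DataFrame({"levels_per_hour": [0.8, 1.1, 0.9, 1.0, 1.2, 0.7]})
candidates = pd.DataFrame({
    "player_id": [101, 102, 103],
    "levels_per_hour": [1.0, 0.9, 14.0],          # player 103 advances implausibly fast
})
print(flag_suspicious(baseline, candidates))
```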

Oddly enough, not every game developer chooses to ban a fraudulent player once identified. Sometimes a basic cost-benefit analysis shows that the cost of the fix is higher than the revenue lost to cheating players. Taking a broader view of the analytics, however, the fraud analysis may more than pay for itself.

For example, rather than using the extracted data merely to fix security breaches and stop cheating players, a developer might also use the data to provide marketing and sales insights. Slower game players and one-time users, quickly identified through the data analysis, might be pinpointed and targeted for promotions to entice faster and more frequent play — increasing company revenues. Other monetization strategies might also be developed through a closer examination of how players navigate the game.

For those interested in how data analytics is used specifically within the gaming genre, a good primer is now available on Amazon: Game Analytics: Maximizing the Value of Player Data by Magy Seif El-Nasr, Anders Drachen, and Alessandro Canossa.

Article by Ed Sarausad on VentureBeat/GamesBeat

Mobile Telco Churn Infographic

[Infographic: Mobile Telco Churn]

Recently, Ed Sarausad wrote an article for RCR Wireless that discussed how big data could be used to develop a plan to counter customer churn in the wireless industry.  We thought it might be interesting to create an infographic that would more clearly illustrate this growing problem: what causes a customer to leave a service provider, how quickly a customer tends to leave any given carrier and how costly high turnover can be for a company.

Reality Check: Using Profitability to Determine Churn Strategies

The American mobile telecommunications industry has matured from its early days of explosive growth into a saturated and fiercely competitive market where carriers are battling simply to retain existing customers. The industry now has one of the highest customer churn rates in business; in August 2012, for example, T-Mobile US’ churn rate was 2.1%, which may seem minuscule until you factor in its millions of subscribers. In fact, voluntary customer churn has become so pervasive that many companies are redirecting vast sums of their marketing dollars from customer acquisition programs to churn management strategies. Gartner recently reported that marketing budgets for customer retention will increase by more than 50% by 2015.

Most often, service providers are reactionary in their efforts to address voluntary churn. But what if unstructured data could be used to develop a model that could identify the most lucrative customers and help develop a proactive program for ensuring their loyalty?

At the end of the day, the goal of any voluntary churn reduction program is revenue stabilization and augmentation. Given that, a subscriber’s profitability is arguably a better basis for a targeted marketing program than overall subscription numbers. Not all subscribers are equal in value; in fact, based on cost versus profitability, there are some customers a service provider would be wiser, from a revenue standpoint, to lose proactively.

The Alacer Group tested this theory in a recent project for a U.S.-based tier-one service provider. By mining unstructured historical data to understand and track the variables that caused its customers to leave, we developed a model that could identify the most profitable customers vulnerable to future churning. Data was extracted from activity areas such as customer billing, network access and call detail records. We could then predict the level and likelihood of voluntary customer churn – giving the carrier a proactive roadmap for countering it to help maintain profitability.

The objective was to sift through enough data to develop a plan to proactively reach out to subscribers before they terminated their contracts. Taking a profit-centric approach, we developed a profit-churn score for each customer by examining more than 70 different pieces of information on more than 70,000 subscribers. Through a binary logistic regression, we could predict with 60% accuracy whether or not any given subscriber would churn. Each customer was then assigned to one of four quadrants within a matrix that described the nature of the customer and helped determine the actions to be taken for desired outcomes.
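
As a rough illustration of the modeling step, and not the carrier’s actual model, the sketch below fits a binary logistic regression to synthetic data with scikit-learn; the three features, the labels and the data itself are assumptions standing in for the 70-plus real variables.

```python
# Rough sketch of a binary logistic regression churn model using scikit-learn.
# The three synthetic features stand in for the 70+ real variables; the data,
# feature meanings and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(2, n),         # customer-service calls
    rng.normal(60, 20, n),     # monthly bill
    rng.integers(1, 5, n),     # lines on the account
])
# Synthetic churn label, loosely driven by frequent service calls
y = (rng.random(n) < 0.2 + 0.3 * (X[:, 0] > 3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
churn_prob = model.predict_proba(X_te)[:, 1]      # per-subscriber churn probability
print("holdout accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 2))
print("sample churn probabilities:", churn_prob[:3].round(2))
```

The predicted probabilities can then be binned into the single-digit churn scores used in the quadrant matrix described below.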

And this is where it got interesting: we learned that 35% of the carrier’s subscribers were more profitable than the average customer, and that they generated a whopping 61% of the carrier’s cumulative expected profit per month.

To identify these highly profitable subscribers, each customer received a combined score that assigned them to a specific quadrant. For example, a customer with a profitability score of 7 and a churn score of 5 received a combined score of 75. Each two-digit score could then be read like a set of X,Y coordinates. The boundaries of the quadrants were set at break points in the carrier’s data, driven by business needs: the churn axis breaks between 2 and 3, where the probability of churning reaches 50% or more, and the profitability axis breaks between 3 and 4, around the calculated mean profitability.
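
A minimal sketch of that quadrant logic follows. The break points mirror the ones described above (profitability above 3, churn above 2); the quadrant labels and the function itself are illustrative assumptions.

```python
# Sketch of the quadrant assignment. The break points follow the description
# above (profitability axis breaks after 3, churn axis after 2); the labels
# and the function itself are illustrative assumptions.
def quadrant(profit_score: int, churn_score: int) -> str:
    high_profit = profit_score > 3    # above the mean-profitability break
    high_churn = churn_score > 2      # above the ~50% churn-probability break
    if high_profit and high_churn:
        return "retain aggressively"      # profitable and likely to leave
    if high_profit:
        return "maintain"                 # profitable and loyal
    if high_churn:
        return "allow to churn"           # unprofitable and likely to leave
    return "move toward profitability"    # unprofitable but loyal

print(quadrant(7, 5))   # the combined score "75" from the example above
```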

By putting the churn and profitability models together, we end up with a useful framework for informed marketing actions. The profitability of each customer can be used to prioritize those who should receive immediate attention, and identify the high cost/low return customers where churn might actually be desirable. The carrier can now develop retention plans focused on its most highly profitable customers who are likely to churn; unprofitable customers are either allowed to leave or targeted for movement to a more profitable status.

The model identified several churn predictors and potential strategies that could be used to retain the carrier’s most valued customers. Examples include the following (a sketch of how such rules might be encoded appears after the list):

–Customer service calls. In this instance, the customer has contacted the customer service team. Through an integrated CRM program, the team can indicate that this customer is a retention priority. The profitable customers who are likely to churn would be at the top of the work queues for callbacks.

–Higher unique subscribers. This would indicate that the customer does a lot of business with the carrier and will regularly shop for volume discounts. The carrier could proactively discount additional lines and offer to consolidate any lines the customer has across multiple carriers.

–Children in household. The carrier could offer discounted family plans to encourage others in the household to join the existing plan instead of choosing a competing offer.

–High credit rating. These are customers who can easily take business elsewhere since they are not locked into a plan due to poor credit. For these customers, the carrier can proactively offer early discounts to extend a 24-month commitment.
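
Here is a hypothetical sketch of how those predictors might be encoded as simple retention rules; the field names, thresholds and action strings are assumptions for illustration, not the carrier’s production logic.

```python
# Hypothetical encoding of the retention playbook above as simple rules.
# Field names, thresholds and action strings are illustrative assumptions.
def retention_actions(customer: dict) -> list[str]:
    actions = []
    if customer.get("service_calls_90d", 0) > 0:
        actions.append("flag as retention priority in CRM callback queue")
    if customer.get("unique_subscribers", 1) >= 3:
        actions.append("offer volume discount and line consolidation")
    if customer.get("children_in_household", False):
        actions.append("offer discounted family plan")
    if customer.get("credit_rating") == "high":
        actions.append("offer early discount tied to a 24-month extension")
    return actions

print(retention_actions({"service_calls_90d": 2, "credit_rating": "high"}))
```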

A quantitative approach to combining churn and profitability models helped this carrier develop voluntary churn counter measures that effectively moved it from a reactive stance to a proactive one. It’s just one example of how big data can provide marketers with a more useful framework for developing proactive action plans that positively affect the bottom line.

Reality Check: Using profitability to determine churn strategies by Ed Sarausad
RCRWireless, June 11, 2013

What Gamers Can Teach Us About Fraud

Like it or not, gamification is on the rise. Since 2010, big consumer brands such as NBC, Walgreens and Southwest Airlines have all launched major projects that center around gaming. Why? Because games make previously dry subjects, such as corporate training, more fun and engaging.

But the gamification of the workplace comes with a few challenges, primarily in the areas of fraud and security. It’s possible to take a look at the casual and online gaming communities, where the concept has its roots, to identify potential pitfalls and solutions.

For example, how big is the problem of cheating in online games? Valve Corporation’s game platform, Steam, developed an anti-cheat solution in 2006 after it detected 10,000 cheating attempts in a single week. As of 2012, it had terminated more than 1.5 million accounts within the 60 games running on Steam.

My company, the Alacer Group, recently had the opportunity to work with one of the world’s largest casual online gaming companies to address revenue leakage experienced through fraudulent game play. The game itself has a simple premise: through social networking with friends, players can amass wealth and gain desired status. Additionally, the user may purchase in-game currency that enhances game play or adds new dimensions. Since the basic game is free, currency plays a key role in the company’s ability to generate revenue.

But when savvy social game players figure out a hack to advance without purchasing currency, there are two big repercussions. Not only does the game developer lose revenue, it loses additional players who get frustrated when they don’t advance in the game as quickly as their cheating counterparts. Normally, network security flaws involving game play are identified and solved on the back end. Often using thousands of servers at high capital and operational expense, IT security personnel will play back every transaction and analyze it in order to determine who the cheaters are and how they are manipulating the game.

This is not only time-consuming, it’s expensive. Here’s the solution we proposed: almost any online company, particularly a game developer, has vast amounts of unstructured data at its fingertips. What if that data could be manipulated to identify and stop fraudulent game activities in real time? Game analytics has emerged as one of the main resources for ensuring game quality, understanding consumer behavior, and maximizing the player experience, similar to how the film and television industries are using big data to cater to viewers.

We used this concept to examine and model game play and to determine player navigation. It was then possible to design algorithms that could define the average player and predict how he or she would typically progress in the game. We could then define the threshold range for acceptable play; anyone who fell outside that range would be flagged as a potentially fraudulent player. This allowed the game developer to immediately freeze accounts for those advancing too quickly — thereby plugging the revenue hole.

The average player profile was determined by examining data such as the friends on the social network that the player interacts with, the level and rapidity of gaming achievements, and the most utilized game elements. From these data points and others, three specific player types emerged (a simple segmentation sketch follows the list):

  1. “one-try wonders” who only try the game once
  2. “early defectors” who do not utilize the game for its expected lifetime
  3. “try hards” who will continue to play until they obtain a goal
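
A simple way to picture this segmentation is sketched below; the session-count and tenure thresholds are assumptions chosen for illustration, not the client’s actual definitions.

```python
# Illustrative segmentation into the three player types above. The session
# and tenure thresholds are assumptions chosen for the sketch.
def classify_player(sessions: int, days_active: int, expected_lifetime_days: int = 90) -> str:
    if sessions <= 1:
        return "one-try wonder"
    if days_active < expected_lifetime_days:
        return "early defector"
    return "try hard"

print(classify_player(sessions=1, days_active=1))      # one-try wonder
print(classify_player(sessions=40, days_active=30))    # early defector
print(classify_player(sessions=400, days_active=180))  # try hard
```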

Of these, the “try hards” are often the most valuable players in online games, as they stay on the site the longest, are the most likely to purchase game currency and often invite friends to join. However, this is also the group that is the most frustrated by cheaters. As one user complained to developers of the online gaming platform Kabam, “You are making good players either leave the game or resort to the same cheating to try and rid the game of the ones in question.”

Unstructured data can also be used to resolve other gamer and developer frustrations. Take a cheater who establishes fake accounts, where imaginary players lose at a game in order to artificially boost the cheater’s standings. We can use deep analytics to reveal and eliminate these fake accounts based on their win/loss ratios.
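
As a sketch of that idea, the check below flags accounts whose record is lopsided enough to look like a feeder account; the minimum game count and the loss-rate cutoff are illustrative assumptions.

```python
# Sketch: flag accounts that lose almost every game, the win/loss signature
# of throwaway accounts created to feed a cheater's ranking. The minimum
# game count and loss-rate cutoff are illustrative assumptions.
def is_probable_fake(wins: int, losses: int, min_games: int = 20, loss_rate_cutoff: float = 0.98) -> bool:
    games = wins + losses
    if games < min_games:
        return False                      # not enough history to judge
    return losses / games >= loss_rate_cutoff

print(is_probable_fake(wins=0, losses=50))   # True
print(is_probable_fake(wins=25, losses=30))  # False
```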

Perhaps more importantly, data can also pinpoint game “whales,” or frequent players who spend too much too quickly to maximize their advancement in the game. There is a strong probability that these whales are, in fact, using stolen credit cards. It may seem odd to non-gamers that someone would risk criminal charges to purchase virtual items, but it does happen — a lot. Last year, a woman in Tennessee used a stolen credit card to purchase $4,500 in virtual buildings, crops, and animals in the Facebook game FarmVille.
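
A rough sketch of a spend-velocity check of this kind is shown below; the helper function and the dollars-per-day threshold are assumptions for illustration only.

```python
# Sketch: flag accounts whose purchase velocity is far outside the norm, a
# pattern associated above with stolen-card "whales". The threshold is an
# illustrative assumption.
from datetime import datetime, timedelta

def spend_velocity(purchases: list[tuple[datetime, float]]) -> float:
    """Dollars spent per day over the span of the purchase history."""
    if len(purchases) < 2:
        return 0.0
    times = sorted(t for t, _ in purchases)
    days = max((times[-1] - times[0]).total_seconds() / 86400, 1.0)
    return sum(amount for _, amount in purchases) / days

now = datetime.now()
history = [(now - timedelta(hours=h), 500.0) for h in range(9)]  # $4,500 within hours
print(spend_velocity(history) > 200)  # True: route the account for card-fraud review
```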

Specifically, our gaming client used the extracted data to close security holes. Every online game has vulnerabilities; the trick is to find them quickly, before they can be massively exploited. The older server-based methodology of searching for fraud can take months; instead, we developed the algorithm for identifying a potentially fraudulent player and inserted it into the network stream within weeks.

It should be pointed out that not every game developer chooses to ban a fraudulent player once identified. In fact, one company, Rockstar Games, chose to create a second version of the online game Max Payne 3 specifically for fraudulent players; anyone found cheating in the original game was quarantined to the cheaters’ version.

The experiences outlined within the casual gaming industry can certainly apply to other consumer and enterprise companies that use gaming tactics in their products. With the explosion of applications and online sites using elements of gamification, ranging from corporate training modules to calendars tracking fitness goals, developers will face the same security issues prevalent within the gaming industry: identity theft, rogue servers, cheating players and more. Unstructured data can be one of the tools used to limit fraudulent activity.

What Gamers Can Teach Us About Fraud by Ed Sarausad
Harvard Business Review, March 18, 2013

Proactively Adopting AML Best Practices

Most financial institutions have the same issue — the AML department has more work than it does resources. How does a bank solve this?

Financial institutions often adopt Anti-Money Laundering (AML) best practices as an afterthought, usually in response to significant fines. But there are sound fiscal reasons why banks should proactively establish processes that identify more efficient ways to conduct AML reviews before a regulatory examination requires them. So why don’t they?

Most financial institutions have the same issue — the AML department has more work than it does resources. This is partially due to growth, but is highly correlated with the level of detail now part of regulatory reviews and the significant fines being levied on organizations that do not fully comply. These factors are the catalyst for maximizing both the efficiency and effectiveness of AML reviews, as well as upstream and downstream processes. By addressing these inefficiencies head on, financial institutions have been able to optimize their processes, which in turn have enabled them to increase capacity to handle incremental volume without materially increasing expenses.

Delivering this type of change requires cross-functional support as well as a proven approach to continuous improvement, such as the Lean Kaizen method. Lean is a methodology that eliminates waste and boosts efficiency; Kaizen refers to continuous improvement. By applying Lean Kaizen tools and engaging the appropriate AML, compliance and business stakeholders, almost any AML process can be dramatically improved.

To be successful, key stakeholders must come together, baseline performance data, and document the current-state process. During the Lean Kaizen session, defects and non-value-added activities, or waste, are identified. And since all of the key stakeholders are in the room, process improvements can be brainstormed and a future-state or ‘to be’ process documented. Kaizens are a key tool in accelerating the pace of change.

In one recent Lean Kaizen event, a global financial institution found that approximately 35 percent of the steps in the AML investigation process added no value. The steps were in place due to historical norms and the fact that the processes had not been updated or revisited despite a series of acquisition integrations, system changes and dramatic growth of the organization. By identifying the steps that did not add value, obtaining the requisite approvals and updating the process (including procedures, manuals, etc.), the end-to-end process became 44 percent more efficient. This led to a commensurate reduction in cycle time and increased capacity to handle growth without adding new headcount.

In another real world example involving a large financial institution, the mood was optimistic in the kick off meeting, but the situation was daunting:

— There were backlogs in many areas

— Metrics were not clearly established and agreed to within the organization

— There were dozens of internal audit and regulatory findings which needed to be addressed

— Many of the AML processes and systems were not optimized

Despite these challenges, the team established a plan to tackle each of the core issues it was facing and set up a resource plan and operating rhythm to chart progress versus goals.

The first step was to identify metrics with Green/Yellow/Red definitions for each of the key processes within the AML department. The team then established a consistent process that facilitated weekly reporting in a standard, structured format. This enabled AML managers to provide their executives with weekly status updates on a consistent and timely basis. Within 6-8 weeks, each manager was reporting his or her metrics consistently and was armed with data to speak intelligently and confidently to the performance of their processes. And even when a process was in Red or Yellow status, they could now communicate the data more accurately and develop a credible remediation plan.
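
A minimal sketch of that kind of Green/Yellow/Red calculation is shown below; the metric names, targets and tolerance band are illustrative assumptions rather than the institution’s actual definitions.

```python
# Sketch of a weekly Green/Yellow/Red status calculation for "lower is better"
# metrics such as alert backlog or cycle time. Metric names, targets and the
# tolerance band are illustrative assumptions.
def rag_status(actual: float, target: float, yellow_tolerance: float = 0.10) -> str:
    if actual <= target:
        return "Green"
    if actual <= target * (1 + yellow_tolerance):
        return "Yellow"
    return "Red"

weekly_metrics = {"alert backlog": (1250, 1000), "avg case cycle days": (9.5, 10)}
for name, (actual, target) in weekly_metrics.items():
    print(f"{name}: {rag_status(actual, target)}")
```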

The next step was to establish staffing models based on several months of historical data. This made annual resource planning much more effective and the managers who could ‘talk to their numbers’ became more successful in receiving approval for their resource requests.
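
As a simplified sketch of such a staffing model, the calculation below converts historical monthly volume and average handle time into required full-time investigators; the handle-time and productive-hours figures are assumptions for illustration.

```python
# Simplified staffing-model sketch: translate historical monthly volumes and
# average handle time into required FTEs. The figures are illustrative assumptions.
def required_fte(monthly_volume: int, handle_hours_per_case: float,
                 productive_hours_per_fte: float = 140.0) -> float:
    return monthly_volume * handle_hours_per_case / productive_hours_per_fte

# e.g. 3,500 alerts per month at 1.5 hours each
print(round(required_fte(3500, 1.5), 1))  # ~37.5 FTE
```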

Finally, process efficiency and productivity reviews were performed for each critical AML function. The goal was to streamline the processes as much as possible and make them easier to execute, with fewer bottlenecks, fewer steps and control points in place where they needed to be. One year later, the organization was operating within its targets, had much greater control over its processes and had achieved a level of stability that increased the regulators’ and senior management’s confidence in growing the business.

All financial institutions should proactively evaluate each pillar of their AML programs. Waiting for internal audit or the regulators to do this during their examination can be even more time consuming and costly than performing this review before they arrive. Being able to articulate the pillars of your AML program is essential and it is helpful to be able to identify the improvements you are making to each pillar along with the rationale. Having this in place can lead to a more proactive discussion with auditors and regulators.

Granted, the AML function is complex, but when you really break it down, AML is a series of processes. Despite their complexity, AML processes can be analyzed and improved using Lean, Kaizen, Six Sigma and other process improvement and reengineering tools that can greatly improve their effectiveness and efficiency.

Proactively Adopting AML Best Practices at Bank Systems & Technology
March 15, 2012