<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>opinion &#8211; Presplay Cloud Tech*</title>
	<atom:link href="https://posts.presplay.cloud/category/opinion/feed/" rel="self" type="application/rss+xml" />
	<link>https://posts.presplay.cloud</link>
	<description>Trends and Inventions in technology</description>
	<lastBuildDate>Mon, 03 Jan 2022 15:51:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5</generator>

<image>
	<url>https://posts.presplay.cloud/wp-content/uploads/2018/02/cropped-presplay-2-32x32.png</url>
	<title>opinion &#8211; Presplay Cloud Tech*</title>
	<link>https://posts.presplay.cloud</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Executing the right migration strategy for your IT transformation.</title>
		<link>https://posts.presplay.cloud/executing-the-right-migration-strategy-for-your-it-transformation/</link>
					<comments>https://posts.presplay.cloud/executing-the-right-migration-strategy-for-your-it-transformation/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Thu, 22 Apr 2021 17:41:43 +0000</pubDate>
				<category><![CDATA[Cloud Database]]></category>
		<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=22501</guid>

					<description><![CDATA[While this may seem academic, we have seen organizations spend millions with no results—all because their project lacked focus. In today’s marketplace, migrations are frequently referred to as relocations, consolidations, cloud migrations, or hybrid migrations. The ability to differentiate between the various types of migrations is fundamental to communicating what you are trying to&#8230;]]></description>
										<content:encoded><![CDATA[<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p><img fetchpriority="high" decoding="async" class="alignnone size-medium wp-image-22454" src="https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3-300x169.jpg" alt="" width="300" height="169" srcset="https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3-300x169.jpg 300w, https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3-1024x576.jpg 1024w, https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3-768x432.jpg 768w, https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3-1536x864.jpg 1536w, https://posts.presplay.cloud/wp-content/uploads/2021/08/jens-weide-dd2xldo-08b81314-8fb8-4668-bd75-db58affd83e3.jpg 1920w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>While this may seem academic, we have seen organizations spend millions with no results—all because their project lacked focus.</p>
<p>In today’s marketplace, migrations are frequently referred to as relocations, consolidations, cloud migrations, or hybrid migrations. The ability to differentiate between these types is fundamental to communicating what you are trying to accomplish, and it enables a more intelligent conversation among an organization, its stakeholders, executive sponsors, and vendors.</p>
<p>Migration is a general, overarching term describing the process of moving IT systems, workloads, applications, and their infrastructure from their present operating environment to one or more new target environments, e.g., private/public cloud, colocation facilities, edge location, and/or an owned and operated data center.</p>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component social-sharing-component "></div>
<div class="component rich-text-component">
<h4><strong>What type of migration should you execute?</strong></h4>
<ul>
<li>Hybrid</li>
<li>Cloud</li>
<li>Consolidation</li>
<li>Colocation</li>
<li>Relocation</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--33-66 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left"></div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<h3 id="hybrid">Hybrid</h3>
<p>The lines between the different types of migrations have blurred. Many organizations today require a hybrid enterprise environment in which their infrastructure and systems operate across multiple IT landscapes. These landscapes can include owned, leased, and operated data centers; various private and public clouds (IaaS, PaaS, SaaS, DRaaS); and colocation facilities. A hybrid enterprise can provision and move applications fluidly between infrastructures to optimize performance, security, and cost. But the most compelling reason for the hybrid enterprise is the speed with which it can respond to market opportunities and competitive threats. The IT department is no longer the bottleneck for rolling out new services; it’s the accelerator, dramatically transforming the IT landscape.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--33-66 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<h3 id="cloud">Cloud</h3>
<p>Cloud migrations move applications, workloads, systems, and infrastructure from a physical and/or virtual environment (p2c, v2c) to a private or public cloud provider, or between cloud environments.</p>
<p>While not necessarily less expensive than physical infrastructure, cloud infrastructure can transform the enterprise with greater agility and scalability through:</p>
<ul>
<li>On-demand self-service</li>
<li>Broad network access</li>
<li>Resource pooling</li>
<li>Rapid elasticity</li>
<li>Measured/metered services</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--33-66 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<h3>Consolidations</h3>
<p>Data center consolidations reduce the number of physical data centers and/or the number of servers in use by decommissioning legacy servers, repurposing servers, and/or reducing the server count via virtualization and/or hyper-converged technology. The goal is a higher level of density and a decreased footprint. In many cases, the physical consolidation of facilities has similar attributes but also includes the sale of the facility, exit from a lease, and/or reuse of the space for other mission-critical needs. Here are several of the top benefits:</p>
<ul>
<li>Less hardware</li>
<li>Power savings</li>
<li>Smaller network</li>
<li>Lower facility costs</li>
<li>Reduced cooling loads</li>
<li>Fewer software licenses</li>
<li>Reduction in manpower</li>
</ul>
<p>Consolidations are typically driven by server sprawl, mergers and acquisitions, a demand for higher density levels via virtualization, and cost savings from power and cooling consumption.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--33-66 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<h3>Colocation/Relocation</h3>
<p>Data center relocations move infrastructure from its current location to a new one. A relocation involves only two data centers: the sending data center (source location) and the receiving data center (target location). These are accomplished in the following manner:</p>
<ul>
<li>Physical-to-physical (p2p or forklift)</li>
<li>Physical-to-virtual (p2v)</li>
<li>Virtual-to-virtual (v2v)</li>
<li>Physical-to-cloud (p2c)</li>
<li>Virtual-to-cloud (v2c)</li>
</ul>
<p>Increasingly, enterprises are electing to colocate in lieu of building new data center facilities. The exceptions are organizations in highly regulated industries with compliance requirements, such as banking, healthcare, and public utilities.</p>
<p>The decision to build, buy, modernize, or colocate a data center is also influenced by an organization’s CapEx or OpEx posture. If the goal is to maximize OpEx, then colocation is smart. If not, CapEx compels a build, buy, or modernize approach depending on variables such as power consumption, cooling, footprint, cost, and timeline to operational readiness.</p>
<p>Relocation success starts with identifying the optimum target location; therefore, site selection is critical.</p>
</div>
</div>
</div>
</div>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/executing-the-right-migration-strategy-for-your-it-transformation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Migrate with confidence using a proven data center and cloud migration strategy.</title>
		<link>https://posts.presplay.cloud/migrate-with-confidence-using-a-proven-data-center-and-cloud-migration-strategy/</link>
					<comments>https://posts.presplay.cloud/migrate-with-confidence-using-a-proven-data-center-and-cloud-migration-strategy/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Mon, 22 Mar 2021 17:46:41 +0000</pubDate>
				<category><![CDATA[Cloud Database]]></category>
		<category><![CDATA[opinion]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=22503</guid>

					<description><![CDATA[The Migration Strategy Board helps organizations develop a data center or cloud migration plan. Designed like an interactive game board, it visualizes the five major phases of a data center and cloud migration project. It includes objectives, deliverables, tools, and additional tips for migrating assets, infrastructure, and services. A methodology specific to migrations helps ensure operational stability,&#8230;]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="aligncenter" src="https://blogvaronis2.wpengine.com/wp-content/uploads/2018/05/data-migration-hero-1.png" alt="Data Migration Guide: Strategy Success &amp; Best Practices | Varonis" /></p>
<p style="text-align: left;">The Migration Strategy Board helps organizations develop a data center or cloud migration plan. Designed like an interactive game board, it visualizes the five major phases of a data center and cloud migration project.</p>
<p style="text-align: left;">It includes objectives, deliverables, tools, and additional tips for migrating assets, infrastructure, and services. A methodology specific to migrations helps ensure operational stability, business continuity, and savings.</p>
<p>&nbsp;</p>
<p><strong>White Paper</strong></p>
<h4>The Cloud and Data Center Migration Methodology</h4>
<p>Discover the cloud and data center migration methodology behind the Migration Strategy Board. It is a proven process that migrates physical assets and apps with zero operational disruption. Get insight for your hybrid IT transformation and reduce IT costs.</p>
<div class="row two-col-component two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h2>Migrate physical assets and applications with zero operational disruption.</h2>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>David-Kenneth Group’s Cloud and Data Center Migration Methodology (DKGm) is a step-by-step, proven process for planning and executing successful migrations. It is intentionally designed to support hybrid IT operating environments. Organized into five phases, it includes a baseline of more than 80 deliverables and two migration-specific toolsets. Below is a summary of each phase.</p>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<p><strong>Five Methodology Phases</strong></p>
<ul>
<li>Phase 1: Initiation</li>
<li>Phase 2: Discovery</li>
<li>Phase 3: Planning</li>
<li>Phase 4: Execution</li>
<li>Phase 5: Closeout</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--padded-right two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<p>&nbsp;</p>
<div class="component rich-text-component">
<h2>Phase 1: Initiation</h2>
</div>
<div class="component rich-text-component">
<p>The Initiation Phase launches the project and includes key objectives that lay the groundwork for success. It establishes the business case and project charter, organizes the project team, and is where a high-level migration strategy begins to take shape.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>Your key objectives are:</p>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Define enterprise strategy:</strong> Identify and document how the cloud and the data center can be a driver of key strategic business initiatives. Also, provide guidelines for cloud adoption, future modernization and optimization initiatives, and other agile technologies.</li>
</ul>
</div>
<ul>
<li class="component rich-text-component"><strong>Develop business case: </strong>Identify the goals and objectives (business and technical) for cloud and data center migrations, key performance indicators, risks and benefits, and an ROI analysis for the project.  A well-crafted case facilitates institutional buy-in.</li>
<li class="component rich-text-component"><strong>Develop project charter: </strong>Document project objectives and constraints, what is in and out of scope, the resources involved, milestones, risks, dependencies, and high-level budget estimates. A well-executed charter helps to minimize internal politics.</li>
</ul>
<div class="component rich-text-component">
<ul>
<li><strong>Select migration approaches:</strong> Given your current data center, cloud, colocation, and edge environments, identify potential migration types, taking into consideration security and regulatory compliance. Also include in your evaluation the project teams’ skill sets, which can be a significant cost driver if skill gaps need to be addressed.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<ul>
<li class="component rich-text-component"><strong>Create a plan and schedule: </strong>Establish a preliminary timeline, schedule, resource list, and communication plan for stakeholders.  It should include the estimated resource costs, the number of sites to be moved, the number of move groups, and the major milestones.</li>
<li class="component rich-text-component"><strong>Kick-off migration: </strong>Review the project timeline, high-level project goals, set expectations, review roles and responsibilities, articulate the communication plan, and kick off your data center migration project with actionable next steps.</li>
</ul>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--50-50 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h3>Tips</h3>
<ul>
<li>Solid goals are key.</li>
<li>Set realistic expectations.</li>
<li>Enforce project priorities. They will be tested.</li>
<li>Communicate early and often with stakeholders.</li>
</ul>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<p><img decoding="async" class="alignright" src="http://www.davidkennethgroup.com/-/media/images/32-testimonial/quote-circle.jpg?h=45&amp;w=45&amp;hash=5A1FC26895C3F7293B06303627E512982B6C2B16&amp;la=en" alt="Quote mark" /></p>
<h6 style="text-align: right;">The people in your organization and your customers are the <strong>most<br />
critical component</strong> of your migration. Communicating a solid plan will build<br />
confidence in the project and help address organizational concerns.</h6>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="column-component col-sm-12 col-md-offset-1 col-md-10 column-bottom-padding">
<div class="horizontal-line" style="text-align: right;"></div>
</div>
</div>
<div class="row two-col-component two-col-component--padded-right two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h2>Phase 2: Discovery</h2>
</div>
<div class="component rich-text-component">
<p>The Discovery Phase identifies and documents your physical and virtual environment, inclusive of the application inventory and interdependencies. With a combination of auto and manual data collection methods, discovery identifies critical data at the business, application, and infrastructure layers.</p>
<p>The goal is to create a Master Asset Library (MAL), a repository for asset data and the authoritative source for the migration.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>It includes the following objectives:</p>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Collect documentation:</strong> Collect and consolidate all existing documentation from each business unit, via your databases (CMDB/CMS), diagrams, facility documentation, contracts, etc.</li>
<li><strong>Deploy auto-discovery tool:</strong> Deploy an auto-discovery tool to capture data across your operating environment and into the cloud, from application workloads to data lurking at the edge.</li>
<li><strong>Map dependencies:</strong> Identify interdependencies across the business, application, and infrastructure layers (network, cloud, compute, and storage). Also capture traffic flows between server, storage, and network assets daily, weekly, monthly, quarterly, and during other seasonal events.</li>
</ul>
</div>
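<p>To make the dependency-mapping step concrete, here is a minimal sketch in Python that turns captured traffic-flow records into an interdependency map. The record format, host names, and threshold are illustrative assumptions, not the output of any particular discovery tool.</p>

```python
from collections import defaultdict

def map_dependencies(flows, min_connections=1):
    """Build a bidirectional adjacency map of asset interdependencies
    from (source, destination, connection_count) flow records."""
    deps = defaultdict(set)
    for src, dst, count in flows:
        if src != dst and count >= min_connections:
            deps[src].add(dst)
            deps[dst].add(src)  # for move planning, dependencies cut both ways
    return dict(deps)

# Sample flows an auto-discovery tool might capture over a collection window:
flows = [
    ("web01", "app01", 300),
    ("app01", "db01", 120),
    ("app01", "cache01", 45),
]
deps = map_dependencies(flows)
```

<p>Real discovery tools export far richer flow data, but even this simple adjacency map shows why web01, app01, db01, and cache01 would have to be evaluated together rather than in isolation.</p>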
<div class="component rich-text-component">
<ul>
<li><strong>Distribute questionnaires:</strong> Use questionnaires to collect otherwise “undiscoverable” information from server and application owners, resolve gaps from auto-discovery, and understand data points that machines cannot uncover, i.e., maintenance windows, application owners, hard-coded IP addresses, security concerns, etc.</li>
<li><strong>Conduct interviews:</strong> Interview server and application owners to resolve gaps, address external partner and cloud connections, capture undocumented requirements, and address special situations brought to attention by the questionnaires.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-left">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<ul>
<li><strong>Do manual inventory and site audit:</strong> Validate auto-discovered data and capture any assets that were not auto-discoverable. Gather information needed for the floor plan, rack elevation, and patch diagrams.</li>
<li><strong>Consolidate asset data:</strong> Consolidate auto- and manual-discovery data into a single repository of truth, aka the MAL, and keep it current with regular scans, including the integration of operational activities with your change control board.</li>
</ul>
</div>
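<p>A minimal sketch of the consolidation step, assuming simple dictionary records keyed by hostname (the field names are hypothetical). Manually audited fields overwrite auto-discovered ones, since the site audit is what validates the scan data:</p>

```python
def consolidate_mal(auto_discovered, manual_audit):
    """Merge asset records into one repository of truth (the MAL),
    keyed by hostname; manual-audit fields take precedence."""
    mal = {}
    for record in auto_discovered:
        mal[record["hostname"]] = dict(record)
    for record in manual_audit:
        # setdefault also captures assets that were not auto-discoverable
        mal.setdefault(record["hostname"], {}).update(record)
    return mal

auto = [{"hostname": "db01", "os": "RHEL 7", "rack": None}]
audit = [
    {"hostname": "db01", "rack": "R12-U20"},     # fills a scan gap
    {"hostname": "tape01", "rack": "R12-U30"},   # found only on the floor walk
]
mal = consolidate_mal(auto, audit)
```

<p>Keeping this merge repeatable matters: the MAL stays current only if rescans and change-control updates flow through the same consolidation path.</p>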
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--50-50 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h3>Tips</h3>
<ul>
<li>Don’t get stuck in discovery. If you document too much, you risk budget and schedule overruns.</li>
<li>Different situations require gathering different information. For example, some undocumented assets may only be known by select stakeholders.</li>
<li>Do not forget to gather business information, i.e., TCO, ROI, SLAs, service windows, refresh.</li>
</ul>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<p><img decoding="async" class="alignright" src="http://www.davidkennethgroup.com/-/media/images/32-testimonial/quote-circle.jpg?h=45&amp;w=45&amp;hash=5A1FC26895C3F7293B06303627E512982B6C2B16&amp;la=en" alt="Quote mark" /></p>
<h6 style="text-align: right;">Organizations discover undocumented IT assets and dependencies 100%<br />
of the time and typically experience a <strong>+20% variance</strong> between what is known and what is discovered.</h6>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--padded-right two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h2>Phase 3: Planning</h2>
</div>
<div class="component rich-text-component">
<p>The Planning Phase is where the team determines dispositions and performs move-group planning. The project plan and schedule evolve into the master plan for execution.  During this phase a clear and consolidated picture of what will be necessary during the execution phase takes shape.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>Here are the objectives:</p>
<ul>
<li><strong>Application workload placement:</strong> Use your discovery data to categorize and prioritize workloads to assess best-fit placements. Be certain to carefully weigh the implications of any application infrastructure modifications, i.e., rehost, re-platform, repurchase, refactor, retire, and retain. Evaluate and select a cloud service provider(s), ensuring organizational requirements, regulatory requirements, data controls, and compliance laws (GDPR, SOX, PII) are supported.</li>
</ul>
</div>
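<p>As a rough illustration of a first-pass placement screen, the sketch below maps a workload record to one of the common dispositions. The field names are hypothetical, and a real decision weighs many more inputs, such as cost, licensing, latency, and performance:</p>

```python
def screen_disposition(workload):
    """Return a first-pass disposition for one workload record."""
    if workload.get("planned_decommission"):
        return "retire"
    if workload.get("compliance_locked"):       # e.g., data-residency rules
        return "retain"
    if workload.get("saas_equivalent"):
        return "repurchase"
    if workload.get("cloud_ready"):
        return "rehost"
    if workload.get("minor_changes_only"):
        return "replatform"
    return "refactor"                           # significant rework required

disposition = screen_disposition({"hostname": "erp01", "cloud_ready": True})
```

<p>A screen like this only narrows the field; each candidate disposition still needs validation against the discovery data before it lands in the plan.</p>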
<div class="component rich-text-component">
<ul>
<li><strong>Determine dispositions:</strong> Use discovery data and business drivers to select the end-state disposition for physical assets and applications where they can optimally function and serve the business. A hybrid mix of dispositions is typical, as some systems are not cloud-ready or cloud-fit.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Create move groups:</strong> Analyze data in the MAL to understand interdependencies between transactions, services, IP, systems, applications, networks, compliance regulations, and hardware. Establish the move groups and move dates, and for a quick win, identify migration pilot group(s) to test and optimize, ensuring that at least one of every migration disposition planned is included.</li>
</ul>
</div>
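<p>The grouping logic above can be sketched as a connected-components pass over the interdependency data: assets that communicate must land in the same move group. This is a simplified illustration; real move-group planning also weighs maintenance windows, compliance boundaries, and business calendars:</p>

```python
def build_move_groups(deps):
    """Partition assets into move groups, where deps maps each asset
    to the set of assets it communicates with."""
    seen, groups = set(), []
    for root in deps:
        if root in seen:
            continue
        group, stack = set(), [root]
        while stack:                      # walk everything reachable from root
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(deps.get(node, ()))
        seen |= group
        groups.append(group)
    return groups

deps = {
    "web01": {"app01"}, "app01": {"web01", "db01"}, "db01": {"app01"},
    "hr01": {"hrdb01"}, "hrdb01": {"hr01"},
}
groups = build_move_groups(deps)
```

<p>Here the web stack and the HR stack fall into two independent move groups, so they can be scheduled on different move dates without breaking a live dependency.</p>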
<div class="component rich-text-component">
<ul>
<li><strong>Modernize and optimize infrastructure:</strong>  Identify the current state of your infrastructure (as-is), the ideal future state (to-be), and the temporary transition state necessary between them. Align with business, budget, and timeline objectives to remediate necessary gaps and issues. Design (high-level and low-level designs), procure via a bill of materials (BOM), build, and configure your target state IT infrastructure environment.</li>
<li><strong>Modernize and optimize cloud: </strong> Ensure you are fully leveraging the agility and elasticity of the cloud by updating operational processes and procedures.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Formulate test plans:</strong> Migration testing must validate that the new infrastructure and cloud services continue to deliver the same services and performance. Tabletop, infrastructure, application, user acceptance, and latency testing are optimal.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Develop move plans:</strong> Sequence all the move groups on a timeline for the migration. Review and verify the move plan schedule with the business stakeholders to ensure no negative impacts. Don’t forget logistical and backup planning.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Plan data management:</strong> Compartmentalize the data in the cloud environment to control access, ensure compliance requirements are in place/configured, and mitigate security risks.</li>
<li><strong>Mitigate migration risks:</strong> Mitigate potential migration risks and issues with contingency dates, rollback plans, backup resources, a clear chain of command for quick decisions, enforced change control, and other elements that establish a robust fault-tolerant plan.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<ul>
<li><strong>Prepare target data center locations:</strong>  Upgrade and install in advance the power, circuits, cooling, and cabling infrastructure. Ready all the new equipment in the target location and validate it performs to specification via ready-for-use (RFU) testing before any migrations can begin.</li>
<li><strong>Prepare target cloud provider:</strong> Configure the initial cloud setup for users, billing, resource hierarchy, access control, networking configuration, monitoring, security, compliance, and other basic setup requirements before the first service is hosted, to avoid future re-work and issues. Architect and design your target cloud environment much like you would your data center, taking into account needs for high availability and redundancy. The cloud is not an excuse for poor design and is not a panacea with built-in redundancy. It must be designed.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--padded-right two-col-component--50-50">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left col-shared--background-white column-top-padding">
<div class="component rich-text-component">
<h3><strong>Tips</strong></h3>
<ul>
<li>Creative timing and groupings can simplify or reduce costs.</li>
<li>Procurement is often a critical path, especially for new circuits.</li>
<li>Have a fallback window and a plan B. Do not be afraid to use them.</li>
</ul>
</div>
</div>
<div class="column-component two-col-component__right col-shared--background-white column-top-padding">
<div class="component rich-text-component">
<p><img decoding="async" class="alignright" src="http://www.davidkennethgroup.com/-/media/images/32-testimonial/quote-circle.jpg?h=45&amp;w=45&amp;hash=5A1FC26895C3F7293B06303627E512982B6C2B16&amp;la=en" alt="Quote mark" /></p>
<p style="text-align: right;">Smart simplicity and creative problem solving can <strong>resolve<br />
challenges</strong>—like unsupported assets, security gaps,<br />
and more—without disrupting the timeline or the budget.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="column-component col-sm-12 col-md-offset-1 col-md-10 col-shared--background-white column-bottom-padding column-left-padding column-right-padding">
<div class="horizontal-line"></div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h2>Phase 4: Execution</h2>
</div>
<div class="component rich-text-component">
<p>A data center or cloud migration is 75 percent planning and 25 percent execution. Thorough planning and preparation are your best protection against migration risks. Pre-migration testing, pilot migrations, contingency rollback planning, and detailed cutover run books support the final go or no-go decision as the gatekeeper for execution readiness. Post-testing, acceptance criteria, and solid issue tracking ensure you do not walk away from the scene with a hidden bomb still ticking.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>Below are the crucial steps to the Execution Phase:</p>
<ul>
<li><strong>Complete pre-migration tests:</strong> Utilize pre-migration testing as a baseline for post-migration performance, and identify errors or performance issues already present in the system. Course-correct before initiating a migration.</li>
<li><strong>Execute data center pilot migration(s):</strong>  Use pilot migrations as an opportunity to build support and trust internally by migrating low-complexity, low-risk workloads. Utilize the same key steps for large groups, optimizing processes and procedures, and documenting lessons learned. Similarly, migrate non-production environments before their production equivalent to refine and monitor the success of the cutover process.</li>
<li><strong>Execute cloud pilot migrations: </strong>Test migration tools for status tracking, performance evaluation, and cost-effectiveness against the metrics defined in the business case. Map out solutions for bottlenecks before they become an issue with a production system.</li>
<li><strong>Run data center backups:</strong> Establish and execute a backup plan to ensure a full restore is possible in the event of total equipment failure or data loss during migration.</li>
<li><strong>Run cloud backups: </strong>Confirm that business-critical data and applications in the cloud are recoverable and within the allowable timeframe.</li>
<li><strong>Decide go/no-go:</strong> Make a go/no-go decision about whether to execute the migration. This decision is made jointly by a designated team of decision-makers composed of core project team members.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Execute migration:</strong> Execute the migration based on the migration plan and run book. Enforce a clear chain of command and responsibilities, capturing and managing any issues as they emerge.</li>
</ul>
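The readiness checks above lend themselves to automation. Below is a minimal sketch of a go/no-go gate in Python; the check names and results are illustrative placeholders, not a prescribed set of criteria:

```python
# Minimal sketch of an automated go/no-go gate: every readiness check
# must pass before the cutover run book is executed. Check names are
# illustrative assumptions, not a real project's criteria.

READINESS_CHECKS = {
    "pre_migration_tests_passed": True,
    "pilot_migration_signed_off": True,
    "backups_verified_restorable": True,
    "rollback_plan_approved": True,
    "cutover_run_book_reviewed": True,
}

def go_no_go(checks):
    """Return (go, blockers): go only if every check passed."""
    blockers = [name for name, passed in checks.items() if not passed]
    return (not blockers, blockers)

go, blockers = go_no_go(READINESS_CHECKS)
print("GO" if go else "NO-GO: %s" % blockers)
```

A single failed check turns the decision into a no-go and names the blocker, which mirrors how a joint decision team would work through the list.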
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<ul>
<li><strong>Complete post-migration testing:</strong> Run infrastructure test scripts on storage devices and servers to ensure a healthy connected environment, then hand off for business user acceptance testing of databases and applications. Finally, closely monitor applications in production for the next 24 hours for any necessary performance tuning.</li>
<li><strong>Conclude support period:</strong> Keep the migration support desk open to ensure full stabilization of the system and to address issues that can surface a week or so post-migration.</li>
</ul>
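Post-migration testing hinges on the baseline captured during pre-migration tests. A minimal sketch of that comparison, with metric names and a 20 percent tolerance chosen only for the example:

```python
# Sketch: compare post-migration metrics against the pre-migration
# baseline. A metric that degrades beyond the tolerance is flagged
# for performance tuning. Metric names and the 20% tolerance are
# illustrative assumptions.

def regressions(baseline, post, tolerance=0.20):
    """Return metrics that degraded more than `tolerance` vs. baseline."""
    flagged = []
    for metric, base_value in baseline.items():
        # A metric missing from the post-migration run counts as regressed.
        if post.get(metric, float("inf")) > base_value * (1 + tolerance):
            flagged.append(metric)
    return flagged

baseline = {"p95_latency_ms": 120.0, "error_rate_pct": 0.5}
post     = {"p95_latency_ms": 180.0, "error_rate_pct": 0.4}
print(regressions(baseline, post))  # → ['p95_latency_ms']
```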
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--50-50 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h3>Tips</h3>
<ul>
<li>Bigger moves can be easier than small ones.</li>
<li>Migrate from steady-state to steady-state.</li>
<li>Do not underestimate the test planning effort.</li>
</ul>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<p><img decoding="async" class="alignright" src="http://www.davidkennethgroup.com/-/media/images/32-testimonial/quote-circle.jpg?h=45&amp;w=45&amp;hash=5A1FC26895C3F7293B06303627E512982B6C2B16&amp;la=en" alt="Quote mark" /></p>
<h6 style="text-align: right;">Triggering your contingencies and maintaining control can make<br />
a <strong>critical difference</strong> when the plan does not go as planned.</h6>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--padded-right two-col-component--66-33">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h2>Phase 5: Closeout</h2>
</div>
<div class="component rich-text-component">
<p>After completing the complex undertaking of migration, the Closeout Phase can catch organizations by surprise with hidden risks and expenses, and it can take longer than anyone expects.</p>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<p>Below are the objectives:</p>
<ul>
<li><strong>Decommission assets &amp; space:</strong> Track the inventory of remaining assets and develop a strategy for decommissioning them. Assets may be liquidated, retired, or resold. Review your lease agreement and building and facility services contracts. Establish a termination plan for services at the previous facility, restore and clean the site, and protect or destroy any sensitive materials.</li>
<li><strong>Establish operations:</strong> Ensure that regular operations staff are familiar with their responsibilities in the new environment, remediate any skill gaps, and fully return operational activities to them. Also, integrate DevOps and update the governance model, including infrastructure and data standards for the cloud. Define the customer interface for cloud services. To ensure business continuity, review DR plans against the newly transformed environment. Establish strong governance for cloud capabilities and attributes with future cloud applications, services, and monitoring capabilities in mind.</li>
</ul>
</div>
<div class="component rich-text-component">
<ul>
<li><strong>Optimize infrastructure:</strong> Continuously monitor, measure, and improve processes and procedures to ensure responsiveness to business dynamics and needs. Do not hesitate to terminate cloud instances that are not fully utilized or no longer serve their purpose(s). Remember to identify KPIs as a baseline for ongoing infrastructure optimization, modernization, and cost control comparison.</li>
<li><strong>Optimize cloud:</strong> Monitor cloud utilization and costs across internal and external platforms and develop a capacity-planning process that keeps the cloud rightsized. For ongoing cleanup and containment, establish audit levels and a cloud adoption framework.</li>
</ul>
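The rightsizing advice above can be expressed as a simple utilization filter. A minimal sketch, where the 10 percent CPU threshold and the instance records are illustrative assumptions; in practice the metrics would come from your provider's monitoring API (e.g. CloudWatch):

```python
# Sketch: flag cloud instances whose average CPU utilization stays
# below a threshold as candidates for termination or rightsizing.
# Threshold and fleet data are illustrative assumptions.

def rightsizing_candidates(instances, cpu_threshold=10.0):
    """Return ids of instances averaging below `cpu_threshold` % CPU."""
    return [i["id"] for i in instances if i["avg_cpu_pct"] < cpu_threshold]

fleet = [
    {"id": "web-1", "avg_cpu_pct": 42.0},
    {"id": "batch-old", "avg_cpu_pct": 3.5},
]
print(rightsizing_candidates(fleet))  # → ['batch-old']
```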
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--66-33 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<ul>
<li><strong>Update contracts and SLAs:</strong> Manage relationships with stakeholders as you update contracts and SLAs, and ensure all details are reflected in the new agreements.</li>
<li><strong>Conduct debrief &amp; close project:</strong> Change inevitably brings about lessons learned. Review the project with stakeholders and capture, document, and mitigate any issues. Take this opportunity to also document new design considerations discovered during the migration project. Review key success metrics, document findings, show savings, and package these data points into a debriefing report. Invite the sponsors to review and accept the deliverables.</li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="row two-col-component two-col-component--50-50 two-col-component--padded-right">
<div class="col-sm-12 col-md-offset-1 col-md-10">
<div class="row row-equal">
<div class="column-component two-col-component__left">
<div class="component rich-text-component">
<h3>Tips</h3>
<ul>
<li>If the disaster recovery of primary critical systems is not working well, now is the time to correct it while the right team is assembled.</li>
<li>Absorb valuable information gained from the migration project into the organization and tools.</li>
<li>Publish lessons learned as a record for future migration projects.</li>
</ul>
</div>
</div>
<div class="column-component two-col-component__right">
<div class="component rich-text-component">
<p><img decoding="async" class="alignright" src="http://www.davidkennethgroup.com/-/media/images/32-testimonial/quote-circle.jpg?h=45&amp;w=45&amp;hash=5A1FC26895C3F7293B06303627E512982B6C2B16&amp;la=en" alt="Quote mark" /></p>
<h6 style="text-align: right;">Done right, significant <strong>value can be gained and expense saved</strong> in the closeout phase.</h6>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="column-component col-sm-12 col-md-offset-1 col-md-10 column-bottom-padding">
<div class="component rich-text-component">
<h2>The Bottom Line about Cloud and Data Center Migrations</h2>
<p>A data center and cloud migration can fundamentally transform how you deliver IT services and yield an attractive return on investment for IT and business operations. But, you need a rock-solid migration plan, tailored to your operating environment, to get you there.</p>
<div>A methodology specific to migrations will help you create the plan and achieve the transformative results you are looking for <em>while </em>ensuring operational stability in the process.</div>
</div>
</div>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/migrate-with-confidence-using-a-proven-data-center-and-cloud-migration-strategy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>App Testing Must Evolve Within the DevOps Pipeline</title>
		<link>https://posts.presplay.cloud/app-testing-must-evolve-within-the-devops-pipeline/</link>
					<comments>https://posts.presplay.cloud/app-testing-must-evolve-within-the-devops-pipeline/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Sun, 21 Mar 2021 19:45:13 +0000</pubDate>
				<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=19376</guid>

					<description><![CDATA[BY: FRANK As the practice of DevOps evolves, so do the supporting tasks; hopefully in such a way that they introduce increased efficiency and automation to accelerate development and deployment pipelines. However, one specific process still remains a speed bump on the road to DevOps acceleration: the process—or, more specifically, the chore—of testing. Naturally, testing is&#8230;]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" src="https://qainfotech.com/wp-content/uploads/2020/01/software-test-automation_c.jpg" alt="Software Test Automation - Improving Testing Efficiency | QA InfoTech" /></p>
<p>BY: <span class="entry-author"><span class="entry-author-name">FRANK </span></span></p>
<p>As the practice of DevOps evolves, so do the supporting tasks; hopefully in such a way that they introduce increased efficiency and automation to accelerate development and deployment pipelines. However, one specific process still remains a speed bump on the road to DevOps acceleration: the process—or, more specifically, the chore—of testing.</p>
<p>Naturally, testing is an undeniably important component of quality assurance. However, introducing efficiency into a process that dictates quality assurance can accelerate development and improve outcomes, especially when it comes to application testing.</p>
<p>“Application testing should never be done in a vacuum, or, worse, a silo,” said <a href="https://devops.com/the-role-of-soft-skills-in-building-strong-devops-leads/" target="_blank" rel="noopener">Sara Faatz</a>, director of developer relations at <a href="https://www.progress.com/" target="_blank" rel="noopener">Progress Software</a>. “The DevOps process can benefit from accelerating the testing process, introducing automation and embracing teamwork.”</p>
<p>Faatz makes some valid points; historically, application testing was treated as either a standalone or separate process, and often was performed by non-programmers following some type of script. “We discovered early on that creating synergy between testers and developers was an important concept for accelerating development, while fostering teamwork,” said Ramiro Millan, director, product development, <a href="https://docs.telerik.com/teststudio/welcome" target="_blank" rel="noopener">Test Studio</a>. “Addressing the trifecta of acceleration, automation and teamwork challenges actually requires adopting a different mindset when it comes to testing, especially when it comes to automation and communication.”</p>
<p>However, solving those problems requires adopting different procedures, as well as deploying tools to streamline processes and introduce automation. That said, there are ancillary challenges that also have to be addressed. For example, there is often a disconnect between the cultures of QA teams and development teams. Here, QA teams need to overhaul their processes and introduce more agile practices. In other words, the time is now to identify new practices and methods that allow QA teams to be creative and innovative while helping to test the software created by DevOps.</p>
<p>“Automation is becoming hyper-critical for testing. However, it is not just automation alone, but also the ability to run tests in the background and not hijack control of the console or the machine,” Millan said. “Automation should also bring with it the ability to concurrently run multiple tests in the background, which, in turn, accelerates the testing processes.”</p>
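The concurrent background testing Millan describes can be sketched with Python's standard thread pool; the suite names and the stubbed runner are illustrative assumptions, standing in for a real test framework:

```python
# Sketch: run independent test suites concurrently instead of
# serially, so QA stops being a speed bump in the pipeline.
# Suite names are illustrative; run_suite is a stub for a real runner.
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    # Placeholder: invoke the real test runner for this suite here.
    return (name, True)

suites = ["ui-smoke", "api-contract", "pdf-validation"]
with ThreadPoolExecutor() as pool:
    # pool.map preserves input order, so results line up with suites.
    results = list(pool.map(run_suite, suites))
print(results)
```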
<p>That makes a great deal of sense. In many DevOps shops, QA has become a speed bump in the <a href="https://devops.com/?s=CI%2FCD" target="_blank" rel="noopener">CI/CD pipeline</a>, meaning that developers often have to wait for testing processes to complete before they can deploy anything. What’s more, developers are often disconnected from the status of testing.</p>
<p>“Communication between developers and testers is taking on renewed importance. Using tools that can hand off statuses between teams, while also enabling different members to participate in the processes, proves to accelerate execution times,” Faatz said. “The ultimate goal here is to bring forth faster delivery, less maintenance and more stability. I think stability is a big part of the equation. If we can automate those day-to-day, repetitive tasks, we garner that additional stability.”</p>
<p>Although testing is a broad subject, many of the lessons offered can have a beneficial impact on the overall process. Currently, many vendors are focused on subsets of the testing process, such as headless browser testing, PDF validation, UI/UX testing, backend integration and so on. This can be an indicator of just how complex the QA process can become if unified tools and automation are not introduced into the process.</p>
<p>“The trick, here, is to pick a tool that does not replace peoples’ processes, but transforms manual QA processes into something that can leverage automation,” Millan said. “Automation brings about consistency, and consistency brings forth improved results.”</p>
<p>The lessons learned indicate that testing must evolve to keep up with the DevOps model, and also suggest that DevOps is pushing QA teams to adopt new methods of testing and interaction. The only remaining question is how quickly QA can better integrate into DevOps, and eliminate the deployment speed bumps.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/app-testing-must-evolve-within-the-devops-pipeline/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Incident Management Process: 5 Steps to Effective Resolution</title>
		<link>https://posts.presplay.cloud/incident-management-process-5-steps-to-effective-resolution/</link>
					<comments>https://posts.presplay.cloud/incident-management-process-5-steps-to-effective-resolution/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Tue, 22 Dec 2020 18:41:20 +0000</pubDate>
				<category><![CDATA[Cloud Database]]></category>
		<category><![CDATA[CRM]]></category>
		<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=22509</guid>

					<description><![CDATA[An incident management process is a set of procedures and actions taken to respond to and resolve critical incidents: how incidents are detected and communicated, who is responsible, what tools are used, and what steps are taken to resolve the incident. Incident management processes are used across many industries, and incidents can include anything from&#8230;]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="aligncenter" src="https://www.engagebay.com/blog/wp-content/uploads/2019/01/crm-database-maintenance.png" alt="CRM Database: Overview, Structure, Strategies &amp;amp; Maintenance Tips" /></p>
<p>An incident management process is a set of procedures and actions taken to respond to and resolve critical incidents: how incidents are detected and communicated, who is responsible, what tools are used, and what steps are taken to resolve the incident.</p>
<p>Incident management processes are used across many industries, and incidents can include anything from IT system failure, to events requiring the attention of healthcare professionals, to critical maintenance of physical infrastructure.</p>
<p><b>In this article, you will learn:</b></p>
<ul>
<li>Why Is Incident Management Important?</li>
<li>What Is an Incident Management Process?</li>
<li>The Five Steps of Incident Resolution</li>
</ul>
<ol>
<li>Incident Identification, Logging, and Categorization</li>
<li>Incident Notification &amp; Escalation</li>
<li>Investigation and Diagnosis</li>
<li>Resolution and Recovery</li>
<li>Incident Closure</li>
</ol>
<ul>
<li>Tips for Improving Your Incident Management Process</li>
<li>Train and Support Employees</li>
<li>Set Alerts That Matter</li>
<li>Prepare Your Team for On-Call</li>
<li>Establishing Communication Guidelines</li>
<li>Streamline Change Processes</li>
<li>Improve Systems with Lessons Learned</li>
<li>How to Use Alerting to Improve Your Incident Management Process</li>
<li>Define Your Monitoring and Alerting Strategy</li>
<li>Go Beyond Ticketing Systems</li>
<li>Create a Minimal Runbook</li>
</ul>
<h4><strong>Why Is Incident Management Important?</strong></h4>
<p>Incident management refers to a set of practices, processes, and solutions that enable teams to detect, investigate, and respond to incidents. It is a critical element for businesses of all sizes and a requirement for meeting most data compliance standards.</p>
<p>Incident management processes ensure that IT teams can quickly address vulnerabilities and issues. Faster responses help reduce the overall impact of incidents, mitigate damages, and ensure that systems and services continue to operate as planned.</p>
<p>Without incident management, you may lose valuable data, experience reduced productivity and revenues due to downtime, or be held liable for breach of service level agreements (SLAs). Even when incidents are minor with no lasting harm, IT teams must devote valuable time to investigating and correcting issues.</p>
<p>A few of the most important benefits of implementing an incident management strategy include:</p>
<ul>
<li>Prevention of incidents</li>
<li>Improved mean time to resolution (<a href="https://blog.viibe.co/what-is-mean-time-to-repair/" target="_blank" rel="nofollow noopener">MTTR</a>)</li>
<li>Reduction or <a href="https://blog.viibe.co/machine-downtime/" target="_blank" rel="noopener">elimination of downtime</a></li>
<li>Increased data fidelity</li>
<li>Improved customer experience</li>
</ul>
<p>Another benefit of incident management practices is an overall reduction in costs. According to a study by <a href="https://blogs.gartner.com/andrew-lerner/2014/07/16/the-cost-of-downtime/" target="_blank" rel="noopener noreferrer">Gartner</a>, system or service downtime can cost organizations $300k per hour. Additionally, regulatory fines and loss of customer trust can have significant financial impacts. With incident management, organizations may have to invest more upfront but they can avoid significant costs later on.</p>
<h4><strong>What Is an Incident Management Process?</strong></h4>
<p>Incident management processes are the procedures and actions taken to respond to and resolve incidents. This includes who is responsible for response, how incidents are detected and communicated to IT teams, and what tools are used.</p>
<p>When designed well, incident management processes ensure that all incidents are addressed quickly and that a certain quality standard is maintained. Processes can also help teams improve their current operations to prevent future incidents.</p>
<p>&nbsp;</p>
<h4><strong>The Five Steps of Incident Resolution</strong></h4>
<p>There are five standard steps to any incident resolution process. These steps ensure that no aspect of an incident is overlooked and help teams respond to incidents effectively.</p>
<p><strong>1. Incident Identification, Logging, and Categorization</strong></p>
<p>Incidents are identified through user reports, solution analyses, or manual identification. Once identified, the incident is logged and investigation and categorization can begin. Categorization is important to determining how incidents should be handled and for prioritizing response resources.</p>
<p><strong>2. Incident Notification &amp; Escalation</strong></p>
<p>Incident alerting takes place in this step, although the timing may vary according to how incidents are identified or categorized. Additionally, if incidents are minor, details may be logged or notifications sent without an official alert. Escalation is based on the categorization assigned to an incident and who is responsible for response procedures. If incidents can be automatically managed, escalation can occur transparently.</p>
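Time-based escalation of this kind can be sketched in a few lines. The policy names and 15-minute acknowledgment timeout below are illustrative assumptions, not a recommended configuration:

```python
# Sketch of time-based escalation: if the current responder has not
# acknowledged within the timeout, the alert moves to the next level
# of the escalation policy. Names and timeout are illustrative.

def escalate(policy, acked_by, elapsed_minutes, timeout=15):
    """Return who should hold the alert after `elapsed_minutes`."""
    if acked_by:
        return acked_by  # acknowledged: no further escalation
    level = min(elapsed_minutes // timeout, len(policy) - 1)
    return policy[level]

policy = ["primary-oncall", "secondary-oncall", "team-lead"]
print(escalate(policy, None, 0))   # → primary-oncall
print(escalate(policy, None, 20))  # → secondary-oncall
```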
<p><strong>3. Investigation and Diagnosis</strong></p>
<p>Once incident tasks are assigned, staff can begin investigating the type, cause, and possible solutions for an incident. After an incident is diagnosed, you can determine the appropriate remediation steps. This includes notifying any relevant staff, customers, or authorities about the incident and any expected disruption of services.</p>
<p><strong>4. Resolution and Recovery</strong></p>
<p>Resolution and recovery involve eliminating threats or root causes of issues and restoring systems to full functioning. Depending on incident type or severity, this may require multiple stages to ensure that incidents don’t reoccur.</p>
<p>For example, if the incident involves a malware infection, you often cannot simply delete the malicious files and continue operations. Instead, you need to create a clean copy of your infected systems, isolate the infected components, and fully replace systems to ensure that the infection doesn’t spread.</p>
<p><strong>5. Incident Closure</strong></p>
<p>Closing incidents typically involves finalizing documentation and evaluating the steps taken during response. This evaluation helps teams identify areas of improvement and proactive measures that can help prevent future incidents.</p>
<p>Incident closure may also involve providing a report or retrospective to administrative teams, board members, or customers. This information can help rebuild any trust that may have been lost and creates transparency regarding your operations.</p>
<h4><strong>Tips for Improving Your Incident Management Process</strong></h4>
<p>When defining your incident management processes, the following tips can help you ensure that your processes are effective. These tips can also help ensure that your team is able to adopt processes reliably.</p>
<p><strong>Train and Support Employees </strong></p>
<p>Properly training employees at all levels of your organization can significantly benefit incident management processes. When non-IT staff are aware of how to identify and report incidents, your IT teams can respond faster and need to spend less time interpreting reports. When IT staff are properly trained, they are more effective at working together and can use tools more efficiently.</p>
<p><strong>Set Alerts That Matter</strong></p>
<p>Avoiding alert overload is one of the most important aspects of incident management. If your teams are drowning in alerts, incidents are likely to be overlooked and response times are longer. To avoid this, you should carefully plan how events are categorized and what those categories mean for alerts.</p>
<p>When defining incident alerts you may find it helpful to start by defining your service level indicators. You can use these indicators to determine a hierarchy of functioning that prioritizes root causes over surface-level symptoms. An alert informing teams that a server went down is more useful and effective than 30 alerts, one for each service on that server.</p>
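The one-alert-per-root-cause idea above can be sketched as a rollup: when a host itself is down, its per-service alerts are suppressed in favor of a single host-level alert. The alert shape here is an illustrative assumption:

```python
# Sketch of root-cause rollup: suppress per-service alerts on a host
# that is itself down, emitting one host-level alert instead.
# Alerts are (host, service) pairs; service None means host down.
from collections import defaultdict

def rollup(alerts):
    by_host = defaultdict(list)
    down_hosts = set()
    for host, service in alerts:
        if service is None:
            down_hosts.add(host)
        else:
            by_host[host].append(service)
    rolled = [("host-down", h) for h in sorted(down_hosts)]
    for host, services in by_host.items():
        if host not in down_hosts:  # only alert services on healthy hosts
            rolled.extend(("service-down", "%s/%s" % (host, s)) for s in services)
    return rolled
```

With this scheme, a dead server produces one alert rather than one per service it hosted.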
<p><strong>Prepare Your Team for On-Call</strong></p>
<p>With alert priorities determined, you also need to account for who is responding to those alerts. Defining an on-call schedule helps you ensure that a responder with the appropriate skills and permissions is always available. On-call procedures can also help you ensure that alerts are properly escalated.</p>
<p>After each shift, consider adjusting on-call duties according to the amount of effort that individual staff made. This can ensure your team members aren’t getting overwhelmed. For example, if one team member responds to multiple high-priority incidents in a shift, they should get more time off-call than someone who didn’t have to respond.</p>
<p><strong>Establishing Communication Guidelines</strong></p>
<p>Establishing effective communication is critical to team collaboration and effectiveness. One way to protect and ensure communication is to create guidelines. These guidelines can specify what channels staff should use, what content is expected in those channels, and how communications should be documented.</p>
<p>Clear guidelines can help diffuse tension and blame during stressful response periods by presenting a standard for how employees are expected to interact. Additionally, when communications are documented, teams can refer back to verify content and more easily pass on information without losing detail. This can reduce frustration overall, including the chance of misdirected stress.</p>
<p><strong>Streamline Change Processes</strong></p>
<p>Depending on the systems you are using and your responders’ expertise, you may need to verify or confirm changes required for response. You want to prevent responders from enacting harmful changes or from getting stuck waiting for unnecessary approval.</p>
<p>One option is to clearly identify what levels or types of changes individual staff can make and who they can go to for approval when needed.</p>
<p>If your system requires all changes to be approved by a change advisory board (CAB) you need to ensure that the board is readily available. If board members cannot give the same availability as your responders, you need to put emergency override procedures in place to prevent excess damage.</p>
<p><strong>Improve Systems With Lessons Learned</strong></p>
<p>Reviews should evaluate the reason for the incident and work to identify if any preventative measures can be taken against future incidents. If so, teams need to define and assign tasks to take those measures immediately. Additionally, reviews can help ensure that any remaining incident documentation is completed. This is necessary for liability and compliance auditing.</p>
<p>&nbsp;</p>
<h4><strong>How to Use Alerting to Improve Your Incident Management Process</strong></h4>
<p>The quality of your incident management processes relies heavily on how you generate and manage alerts. If you do not have strong alerting practices or systems in place, your incident management is bound to be disorganized and slow. To avoid poor management and ensure high-quality processes, keep the following tips in mind.</p>
<p><strong>Define Your Monitoring and Alerting Strategy</strong></p>
<p>Monitoring and alerting strategies define which system components you are monitoring, the importance of those components, and how issues with those components are conveyed. Your monitoring goal should be to create centralized, continuous visibility of your systems. Your alert goals should be to reduce false positives or negatives, and to ensure that alerts are meaningful.</p>
<p>When creating your strategies, it helps to start small and with the most critical components of your systems. Eventually you should be monitoring environments in their entirety but you need to ensure system stability before you can do this. If you focus on the most important components first, you ensure that systems remain operational and grant yourself time for optimizations.</p>
<p><strong>Go Beyond Ticketing Systems</strong></p>
<p>Ticketing systems can be useful for tracking issues and providing customer support but are often not the best tool for incident management. These systems typically require information to be manually filed before tasks can be addressed and can significantly slow response times.</p>
<p>This manual requirement is especially problematic for customer-facing systems, where users may simply abandon your service rather than reporting an issue. If you integrate your monitoring and response tools you can work to avoid this abandonment.</p>
<p>If you need to use a ticketing system, you should automate as much of the ticket creation process as possible to reduce delays. Otherwise, consider adopting tools that enable your teams to communicate about, investigate, and respond to alerts from a single platform. Even if a tool does not provide these capabilities natively, there are integrations that automate the transfer of information or trigger actions across tooling.</p>
<p><strong>Create a Minimal Runbook</strong></p>
<p>Runbooks are essentially collections of scripts or procedures that you can use to automate or outline processes. With runbooks you can standardize processes and create a shared knowledge base of actions for your team. Once runbooks are defined, you can assign books directly to alert details or specific events.</p>
<p>Alternatively, you can provide a library of runbooks to your responders with guidelines for when they should use specific books. This enables you to distribute skills and expertise across your response tiers, ensuring that even lower-level staff can perform required response actions with ease.</p>
<p>One caveat of runbooks is that the information contained can be time consuming to maintain. Detailed books need to be verified and updated with every system change to prevent books from becoming outdated or harmful. Creating minimal runbooks is one way to avoid this maintenance. With these guides, you can still share basic information across your team with minimal maintenance.</p>
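A minimal runbook can be as simple as a mapping from alert category to a short list of steps. The categories and steps below are illustrative assumptions, not recommended procedures:

```python
# Sketch of a minimal runbook: map an alert category to terse response
# steps. Keeping entries short limits the maintenance burden that
# detailed runbooks incur. Categories and steps are illustrative.

RUNBOOK = {
    "disk-full": [
        "Identify the largest directories on the affected volume",
        "Rotate or archive logs, then re-check free space",
        "Escalate if free space stays below 10%",
    ],
    "service-down": [
        "Check service status and recent deploys",
        "Restart the service; if it fails again, roll back",
    ],
}

def steps_for(alert_category):
    """Look up response steps; unknown categories get a safe default."""
    return RUNBOOK.get(alert_category, ["Escalate to on-call engineer"])
```

Attaching the right entry to the alert itself means even lower-tier responders start with the correct first steps.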
<h4><strong>Presplay: Agile, Rock-Solid Incident Alert Management </strong></h4>
<p>Presplay Cloud is a SaaS-based incident alert management system that can be easily integrated into incident management tools and hosted in secure, SSAE-16 compliant hosting facilities across the U.S. It provides instant visibility and feedback on incident status, tracks alert delivery and ticket status, and offers solid reliability, ensuring critical incidents are captured and addressed by the relevant teams.</p>
<p>Presplay’s incident management features include:</p>
<ul>
<li><b>Automation of alerts</b>—users can set their own escalation policy and alert the next person on the on-call list if the first person does not respond in a timely manner</li>
<li><b>Real-time reporting</b>—captures real-time stats on individual and group workloads based on each responder’s alert volume and escalation order</li>
<li><b>Alerting across platforms</b>—enables notifications via email, SMS, mobile push and phone calls</li>
<li><b>Mobile incident management</b>—gives incident responders full visibility into the incident and quick, easy ways to respond so they can take immediate action</li>
<li><b>Priority alerting</b>—allows the user to send messages at two priority levels, high and low, with a unique ringtone and persistent alerts to ensure critical high-priority messages are not ignored</li>
<li><b>Open API and out-of-the-box integrations</b>—provides a publicly-available API, offering programmatic access to the software so organizations can integrate it with their existing solutions</li>
<li><b>Secure, real-time collaboration</b>—built-in two-way messaging which supports attachments and predefined responses, and complies with relevant standards</li>
<li><b>Digital scheduler</b>—automated management of on-call schedules, recurring on-call rotations and shifts, automating alerts according to staff schedules and rotations</li>
</ul>
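<p>The escalation behavior in the first bullet can be sketched in a few lines of Python. This is an illustrative sketch, not Presplay&#8217;s actual implementation; the <code>notify</code> and <code>acknowledged</code> hooks are hypothetical stand-ins for real alert delivery and acknowledgement tracking.</p>

```python
def notify(responder):
    # Stand-in for real delivery via email, SMS, mobile push, or phone.
    print(f"alerting {responder}")

def escalate(on_call, acknowledged):
    """Walk the on-call list in escalation order: alert each responder,
    moving to the next only when the current one fails to acknowledge."""
    for responder in on_call:
        notify(responder)
        if acknowledged(responder):
            return responder   # incident has an owner; stop escalating
    return None                # list exhausted; fall back to a team-wide page

# Example: the first responder misses the alert, the second acknowledges.
owner = escalate(["alice", "bob", "carol"], acknowledged=lambda r: r == "bob")
```

<p>A real system would wait out a timeout between steps rather than polling an acknowledgement callback, but the escalation-order logic is the same.</p>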
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/incident-management-process-5-steps-to-effective-resolution/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to Perform a Successful Data Migration</title>
		<link>https://posts.presplay.cloud/how-to-perform-a-successful-data-migration/</link>
					<comments>https://posts.presplay.cloud/how-to-perform-a-successful-data-migration/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Wed, 09 Sep 2020 14:18:59 +0000</pubDate>
				<category><![CDATA[Cloud Database]]></category>
		<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=12205</guid>

					<description><![CDATA[According to Raymond Kurzweil, a futurist and the author of The Singularity is Near, the growth of progress is exponential: we’ll see 20,000 years of progress in the 21st century rather than 100. Kurzweil also suggests that by the year 2020, the growth of integrated circuits will slow, and “another paradigm” will replace that and carry the&#8230;]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" src="https://www.optimum7.com/wp-content/uploads/2019/10/surge-46-article-image-1014x487.png" alt="eCommerce Data Migration and Replatforming for Enterprise Level Companies  (Over One Million SKUs)" /></p>
<p>According to Raymond Kurzweil, a futurist and the author of <em><a href="https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889/ref=sr_1_1?ie=UTF8&amp;qid=1495548887&amp;sr=8-1&amp;keywords=The+Singularity+is+Near" target="_blank" rel="noopener noreferrer">The Singularity is Near</a></em>, the growth of progress is exponential: we’ll see 20,000 years of progress in the 21st century rather than 100. Kurzweil also suggests that by the year 2020, the growth of integrated circuits will slow, and “another paradigm” will replace it and carry the exponential growth.</p>
<p>Until the Singularity actually happens and data moves itself across systems, we’re stuck migrating data any time we want to move to a new server or software platform. Businesses in every industry amass vast amounts of data about their customers, employees, products, digital properties, and finances. Eventually, as systems become outdated, we need to move the usable and business-critical information from one system to another. Continually changing <a href="https://technologyadvice.com/data-warehousing/" target="_blank" rel="noopener noreferrer">data storage</a> means we need to start building concrete plans for validating, moving, and testing data.</p>
<h2><strong>Before You Migrate: Plan</strong></h2>
<p>Understand both the source and the target, meaning where the data comes from and where it’s going. For example: you’re porting ledgers and financial data from a desktop accounting system into QuickBooks Online. Once you understand the source and the target, you can start to map out your process. Understanding the IT environment where the migration will take place can help you make decisions about the speed and scope of work, as well as minimize inconvenience.</p>
<p>Plan the migration project according to your business objectives. It’s exciting to pick out new software and start learning a new system, but if the software doesn’t support business objectives and the migration doesn’t move forward with a minimal process disruption, the results can be disastrous. Try to include a stakeholder from every relevant team during the planning stage. Even straightforward internal migrations can fail due to business restrictions, security policies, or other problems that might not be immediately visible to insular employees.</p>
<blockquote><p><em><strong><span class="pull-right pulled-right">Try to include a stakeholder from every relevant team during the planning stage.</span></strong></em></p></blockquote>
<p>You should also watch out for known risks during the migration process. Often the data profiles and form structures in the old and new systems don’t perfectly match, which means data is liable to duplication or distortion during the transformation process. If your testing process fails to accurately translate real data, you may need to make some adjustments to your <a href="https://technologyadvice.com/blog/information-technology/how-to-use-an-api/" target="_blank" rel="noopener noreferrer">API configuration</a>, or call the vendor for support.</p>
<p>As with any project, you’ll want to consider migration costs as part of the investment you’ll make in a new software tool. <a href="http://www.oracle.com/technetwork/middleware/oedq/successful-data-migration-wp-1555708.pdf" target="_blank" rel="noopener noreferrer">Oracle suggests</a> that you factor the costs of any data migration into your investment calculations, rather than tacking it onto your overall costs as an afterthought. Depending on the complexity of the move, the cost and labor of the migration itself could make a new piece of technology prohibitively expensive, and ignoring the migration costs can set your team up for failure.</p>
<h2><strong>Plan Again—in Detail This Time</strong></h2>
<p>Most migrations take place through five major stages:</p>
<ul>
<li><strong>Extraction:</strong> remove data from the current system to begin working on it.</li>
<li><strong>Transformation:</strong> match data to its new forms, ensure that metadata reflects the data in each field.</li>
<li><strong>Cleansing:</strong> de-duplicate, run tests, and address any corrupted data.</li>
<li><strong>Validation: </strong>test and re-test that moving the data to the target location provides the expected response.</li>
<li><strong>Loading: </strong>transfer data into the new system, and review for errors again.</li>
</ul>
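<p>The five stages above can be sketched as a toy pipeline. This is a minimal illustration with made-up field names (<code>cust_id</code>, <code>amt</code>), not a production migration tool; real migrations run these stages against databases and APIs rather than in-memory lists.</p>

```python
def extract(source):
    """Extraction: pull records out of the current system."""
    return list(source)

def transform(records, field_map):
    """Transformation: rename fields to match the target's schema."""
    return [{field_map.get(k, k): v for k, v in r.items()} for r in records]

def cleanse(records, key):
    """Cleansing: de-duplicate, keeping the first occurrence of each key."""
    seen, clean = set(), []
    for r in records:
        if r[key] not in seen:
            seen.add(r[key])
            clean.append(r)
    return clean

def validate(records, required):
    """Validation: every record must carry the required fields."""
    for r in records:
        missing = required - r.keys()
        if missing:
            raise ValueError(f"record {r} is missing {missing}")
    return records

def load(records, target):
    """Loading: transfer data into the new system (here, a plain list)."""
    target.extend(records)
    return target

# Toy run: legacy ledger rows, one duplicated, moved into a renamed schema.
source = [{"cust_id": 1, "amt": 9.5}, {"cust_id": 1, "amt": 9.5}, {"cust_id": 2, "amt": 4.0}]
target = []
records = extract(source)
records = transform(records, {"cust_id": "customer_id", "amt": "amount"})
records = cleanse(records, key="customer_id")
validate(records, required={"customer_id", "amount"})
load(records, target)
```

<p>Keeping each stage a separate function makes it easy to test stages in isolation and to re-run only the stage that failed, which matters once the data volumes are real.</p>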
<p>During this time you’ll want to actually build your plan in a project management or task management software. Your plan should include burndown charts, task assignments, and dependency charts so all involved know their responsibilities and when each task is scheduled to take place. Take the time to do test runs and involve the entire team again to address as many unknowns as possible before the actual migration takes place.</p>
<h2><strong>Move the Data</strong></h2>
<p>Time to put your plan into action. Some teams migrate data on weekends or bank holidays to reduce disruption to business objectives, but this “big bang” migration has many problems of its own and can leave your IT team scrambling to clean up a botched attempt before everyone gets back to work. Others decide to run the old and new system concurrently and transfer data piecemeal. A parallel migration can extend the process, but also gives teams a chance to react to unforeseen difficulties.</p>
<h2><strong>Build a Repeatable Process</strong></h2>
<p>As new platforms continue to hit the market and businesses move faster every day, data migration will become a near-constant process in IT. Once you complete your first data migration, your team can run a full audit of the process to better understand strengths, weaknesses, and mistakes. Document everything in your project management software, and set up a clear, repeatable plan for the future.</p>
<p>If you’re approaching a data migration but still shopping for the next software platform, you can use our <a href="https://technologyadvice.com/smart-advisor/browse/" target="_blank" rel="noopener noreferrer">Product Selection Tool</a> to compare options and get a custom recommendation based on your needs.</p>
<div class="newsletter-cta-module newsletter-cta-footer row">
<div class="row newsletter-cta-module">
<div class="newsletter-cta-module__content col-xs-12 col-sm-7"></div>
</div>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/how-to-perform-a-successful-data-migration/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Designing and Tuning for Performance</title>
		<link>https://posts.presplay.cloud/designing-and-tuning-for-performance/</link>
					<comments>https://posts.presplay.cloud/designing-and-tuning-for-performance/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Sun, 30 Aug 2020 18:58:28 +0000</pubDate>
				<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=13865</guid>

					<description><![CDATA[A well-planned methodology is the key to success in performance tuning. Different tuning strategies vary in their effectiveness, and systems with different purposes, such as online transaction processing systems and decision support systems, require different tuning methods. When Is Tuning Most Effective? For best results its recommended you tune during the design phase, rather than&#8230;]]></description>
										<content:encoded><![CDATA[<p>A well-planned methodology is the key to success in performance tuning. Different tuning strategies vary in their effectiveness, and systems with different purposes, such as online transaction processing systems and decision support systems, require different tuning methods.</p>
<p><img decoding="async" class="aligncenter" src="https://lh3.googleusercontent.com/proxy/BU6mRKnAZzv2FeLLUsc9E8r1YVj__bfw9IKDvU2_w8tgzsHXTFU-QuFtsv4LXmUsUwGbdFfT8wWaT9BudI2QpFc-iBYkg-0rmZLjLwWyzbND" alt="Oracle SQL tuning steps" /></p>
<h2 class="H1"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">When Is Tuning Most Effective?</span></h2>
<p>For best results, it is recommended that you tune during the design phase rather than waiting until after your system is implemented. This is illustrated in the following sections:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#2218">Proactive Tuning While Designing and Developing Systems</a></li>
<li class="LB1" type="DISC"><a name="4429"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#2196">Reactive Tuning to Improve Production Systems</a></li>
</ul>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Proactive Tuning While Designing and Developing Systems</span></h3>
<p class="BP">By far, the most effective approach to tuning is the proactive approach. Begin by following the steps described in this chapter under <a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#9247">&#8220;Prioritized Tuning Steps&#8221;</a>.<br />
Business executives should work with application designers to establish performance goals and set realistic performance expectations. During design and development, the application designers can then determine which combination of system resources and Oracle features best meet these needs.</p>
<p>By designing a system to perform well, you can minimize its implementation and ongoing administration costs. <a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#3114">Figure 2-1</a> illustrates the relative <em class="Italic">cost</em> of tuning during the life of an application.</p>
<h4 class="FT"><span style="font-family: Arial, Helvetica, sans-serif;"><em>Figure 2-1 Cost of Tuning During the Life of an Application</em></span></h4>
<p><img decoding="async" class="alignnone" src="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_met2.gif" width="687" height="273" /></p>
<p class="BP">To complement this view, <a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#3118">Figure 2-2</a> shows that the relative <em class="Italic">benefit</em> of tuning an application over the course of its life is inversely proportional to the cost expended.</p>
<h4 class="FT"><span style="font-family: Arial, Helvetica, sans-serif;"><em>Figure 2-2 Benefit of Tuning During the Life of an Application</em></span></h4>
<p><img decoding="async" src="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_met3.gif" width="687" height="270" /></p>
<p class="BP">The most effective time to tune is during the design phase: you get the maximum benefit for the lowest cost.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Reactive Tuning to Improve Production Systems</span></h3>
<p class="BP">The tuning process does not begin when users complain about poor response time. When response time is this poor, it is usually too late to implement some of the most effective tuning strategies. At that point, if you are unwilling to completely redesign the application, then you may only improve performance marginally by reallocating memory and tuning I/O.</p>
<p class="BP">For example: There is a bank that employs one teller and one manager. It has a business rule that the manager must approve withdrawals over $20. You find a long line of customers, and you decide that you need more tellers. You add 10 more tellers, but then you find that the bottleneck moves to the manager&#8217;s function. However, the bank determines that it is too expensive to hire additional managers. In this example, regardless of how carefully you tune the system using the existing business rule, getting better performance will be very expensive.</p>
<p class="BP">Alternatively, a change to the business rule may be necessary to make the system more scalable. If you change the rule so that the manager only needs to approve withdrawals exceeding $150, then you have created a scalable solution. In this situation, effective tuning could only be done at the highest design level, rather than at the end of the process.</p>
<p class="BP">It is possible to reactively tune an existing production system. To take this approach, start at the bottom of the method and work your way up, finding and fixing any bottlenecks. A common goal is to make Oracle run faster on the given platform. You may find, however, that both the Oracle server and the operating system are working well. To get additional performance gains, you may need to tune the application or add resources. Only then can you take full advantage of the many features Oracle provides that can greatly improve performance when properly used in a well-designed system.</p>
<p class="BP">Even the performance of well-designed systems can degrade with use. Ongoing tuning is, therefore, an important part of proper system maintenance.</p>
<h2 class="H1"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Prioritized Tuning Steps</span></h2>
<p class="BP">The following steps provide a recommended method for tuning an Oracle database. These steps are prioritized in order of diminishing returns: steps with the greatest effect on performance appear first. For optimal results, therefore, resolve tuning issues in the order listed, from the design and development phases through instance tuning.</p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#390">Step 1: Tune the Business Rules</a><a name="2556"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#400">Step 2: Tune the Data Design</a><a name="2557"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#415">Step 3: Tune the Application Design</a><a name="2558"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#423">Step 4: Tune the Logical Structure of the Database</a><a name="2559"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#436">Step 5: Tune Database Operations</a><a name="2560"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#450">Step 6: Tune the Access Paths</a><a name="2561"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#469">Step 7: Tune Memory Allocation</a><a name="2593"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#493">Step 8: Tune I/O and Physical Structure</a><a name="2562"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#504">Step 9: Tune Resource Contention</a><a name="2563"></a></p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#514">Step 10: Tune the Underlying Platform(s)</a><a name="2075"></a></p>
<p class="BP">After completing these steps, reassess your database performance, and decide whether further tuning is necessary.<a name="2383"></a></p>
<p class="BP">Tuning is an iterative process. Performance gains made in later steps may pave the way for further improvements in earlier steps, so additional passes through the tuning process may be useful.</p>
<p class="BP"><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#3127">Figure 2-3</a> illustrates the tuning method:</p>
<h4 class="FT"><span style="font-family: Arial, Helvetica, sans-serif;"><em>Figure 2-3 The Tuning Method</em></span></h4>
<p><img loading="lazy" decoding="async" src="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meta.gif" width="687" height="641" /></p>
<p class="BP">Decisions you make in one step may influence subsequent steps. For example, in step 5 you may rewrite some of your SQL statements. These SQL statements may have significant bearing on parsing and caching issues addressed in step 7. Also, disk I/O, which is tuned in step 8, depends on the size of the buffer cache, which is tuned in step 7. Although the figure shows a loop back to step 1, you may need to return from any step to any previous step.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 1: Tune the Business Rules</span></h3>
<p class="BP">For optimal performance, you may need to adapt business rules. These concern the high-level analysis and design of an entire system. Configuration issues are considered at this level, such as whether to use a multi-threaded server system-wide. In this way, the planners ensure that the performance requirements of the system correspond directly to concrete business needs.</p>
<p class="BP">Performance problems encountered by DBAs may actually be caused by problems in design and implementation, or by inappropriate business rules. Designers sometimes provide far greater detail than is needed when they write business functions for an application. They document an implementation, rather than simply the function that must be performed. If business executives effectively distill business functions or requirements from the implementation, then designers have more freedom when selecting an appropriate implementation.</p>
<p class="BP">Consider the business function of printing checks. The actual requirement is to pay money to people, not necessarily to print pieces of paper. Whereas it would be very difficult to print a million checks per day, it would be relatively easy to record that many direct deposit payments on a tape that could be sent to the bank for processing.</p>
<p class="BP">Business rules should be consistent with realistic expectations for the number of concurrent users, the transaction response time, and the number of records stored online that the system can support. For example, it does not make sense to run a highly interactive application over slow, wide area network lines.</p>
<p class="BP">Similarly, a company soliciting users for an Internet service might advertise 10 free hours per month for all new subscribers. If 50,000 users per day signed up for this service, then the demand far exceeds the capacity for a client/server configuration. The company should instead consider using a multi-tier configuration. In addition, the signup process must be simple: it should require only one connection from the user to the database, or connection to multiple databases without dedicated connections, using a multi-threaded server or transaction monitor approach.</p>
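<p>A back-of-envelope calculation shows why the signup example overwhelms a dedicated-connection design. The numbers below are illustrative assumptions (a month of signups at the advertised rate, usage spread evenly), not figures from the original example.</p>

```python
# Back-of-envelope sizing for the free-hours signup example.
signups_per_day = 50_000
free_hours_per_month = 10      # advertised allowance per subscriber
days_to_steady_state = 30      # assume a month of signups accumulates

subscribers = signups_per_day * days_to_steady_state        # 1,500,000
usage_hours_per_month = subscribers * free_hours_per_month  # 15,000,000
hours_in_month = 30 * 24                                    # 720

# Average concurrent sessions if usage were spread perfectly evenly.
avg_concurrent = usage_hours_per_month / hours_in_month     # ~20,833
```

<p>Even under the generous even-spread assumption, the system averages over 20,000 concurrent sessions, and real peaks run several times the average, which is why connection pooling through a multi-tier or multi-threaded server configuration becomes necessary.</p>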
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 2: Tune the Data Design</span></h3>
<p class="BP">In the data design phase, you must determine what data is needed by your applications. You must consider what relations are important, and what their attributes are. Finally, you need to structure the information to best meet performance goals.</p>
<p class="BP">The database design process generally undergoes a normalization stage when data is analyzed to eliminate data redundancy. With the exception of primary keys, any one data element should be stored only once in your database. After the data is normalized, however, you may need to denormalize it for performance reasons. You might decide that the database should retain frequently used summary values. For example, rather than forcing an application to recalculate the total price of all the lines in a given order each time it is accessed, you might decide to always maintain a number representing the total value for each order in the database. You could set up primary key and foreign key indexes to access this information quickly.</p>
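<p>The order-total denormalization described above can be sketched as follows. This is a toy in-memory illustration of the design trade-off, not database code: the summary value is updated on every write so that reads never re-sum the lines.</p>

```python
class Order:
    """Denormalized order: line items plus a stored running total,
    so the (common) read path never recomputes the sum."""

    def __init__(self):
        self.lines = []
        self.total = 0.0   # maintained summary value

    def add_line(self, price, qty):
        self.lines.append((price, qty))
        self.total += price * qty   # keep the summary in sync on write

order = Order()
order.add_line(price=19.99, qty=2)
order.add_line(price=5.00, qty=1)
# order.total now holds 44.98 without scanning the lines
```

<p>The cost of this choice is write-path complexity: every code path that modifies lines must also maintain the total, which in a database is typically enforced with a trigger or handled in a single stored procedure.</p>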
<p class="BP">Another data design consideration is avoiding data contention. Consider a database 1 terabyte in size on which one thousand users access only 0.5% of the data. This &#8220;hot spot&#8221; in the data could cause performance problems.</p>
<p class="BP">In a multiple-instance setup, try to localize access to the data down to the partition level, process, and instance levels. That is, localize access to data, such that any process requiring data within a particular set of values is confined to a particular instance. Contention begins when several remote processes simultaneously attempt to access one particular set of data.</p>
<p class="BP">In Oracle Parallel Server, look for synchronization points&#8211;any point in time, or part of an application that must run sequentially, one process at a time. The requirement of having sequential order numbers, for example, is a synchronization point that results from poor design.</p>
<p class="BP">Also consider implementing two Oracle8<em class="Italic">i</em> features that can help avoid contention:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="4655"></a>Consider partitioning your data.</li>
<li class="LB1" type="DISC"><a name="4656"></a>Consider using local or global indexes.</li>
</ul>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 3: Tune the Application Design</span></h3>
<p class="BP">Business executives and application designers should translate business goals into an effective system design. Business processes concern a particular application within a system, or a particular part of an application.</p>
<p class="BP">An example of intelligent process design is strategically caching data. For example, in a retail application, you can select the tax rate once at the beginning of each day, and cache it within the application. In this way, you avoid retrieving the same information over and over during the day.</p>
<p class="BP">At this level, you can also consider the configuration of individual processes. For example, some PC users may access the central system using mobile agents, where other users may be directly connected. Although they are running on the same system, the architecture for each type of user is different. They may also require different mail servers and different versions of the application.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 4: Tune the Logical Structure of the Database</span></h3>
<p class="BP">After the application and the system have been designed, you can plan the logical structure of the database. This primarily concerns fine-tuning the index design to ensure that the data is neither over- nor under-indexed. In the data design stage (Step 2), you determine the primary and foreign key indexes. In the logical structure design stage, you may create additional indexes to support the application.</p>
<p class="BP">Performance problems due to contention often involve inserts into the same block or incorrect use of sequence numbers. Use particular care in the design, use, and location of indexes, as well as in using the sequence generator and clusters.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 5: Tune Database Operations</span></h3>
<p class="BP">Before tuning the Oracle server, be certain that your application is taking full advantage of the SQL language and the Oracle features designed to enhance application processing. Use features and techniques such as the following, based on the needs of your application:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="440"></a>Array processing</li>
<li class="LB1" type="DISC"><a name="2745"></a>The Oracle optimizer</li>
<li class="LB1" type="DISC"><a name="442"></a>The row-level lock manager</li>
<li class="LB1" type="DISC"><a name="444"></a>PL/SQL</li>
</ul>
<p class="BP">Understanding Oracle&#8217;s query processing mechanisms is also important for writing effective SQL statements.</p>
<p class="BP">Whether you are writing new SQL statements or tuning problematic statements in an existing application, your methodology for tuning database operations essentially concerns CPU and disk I/O resources.</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="9146"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#9152">Step 1: Find the Statements that Consume the Most Resources</a></li>
<li class="LB1" type="DISC"><a name="9150"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#9179">Step 2: Tune These Statements To Use Fewer Resources</a></li>
</ul>
<h4 class="H3"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 1: Find the Statements that Consume the Most Resources</span><a name="9153"></a></h4>
<p class="BP">Focus your tuning efforts on statements where the benefit of tuning demonstrably exceeds the cost of tuning. Use tools such as <code>TKPROF</code>, the SQL trace facility, SQL Analyze, Oracle Trace, and the Enterprise Manager Tuning Pack to find the problem statements and stored procedures. Alternatively, you can query the <code>V$SORT_USAGE</code> view to see the session and SQL statement associated with a temporary segment.</p>
<p class="BP">The statements with the most potential to improve performance, if tuned, include:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="9156"></a>Those consuming greatest resource overall.</li>
<li class="LB1" type="DISC"><a name="9157"></a>Those consuming greatest resource per row.</li>
<li class="LB1" type="DISC"><a name="9158"></a>Those executed most frequently.</li>
</ul>
<p class="BP">In the <code>V$SQLAREA</code> view, you can find those statements still in the cache that have done a great deal of disk I/O and buffer gets. (Buffer gets show approximately the amount of CPU resource used.)</p>
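<p>The two rankings that matter, overall resource use and resource per execution, can be illustrated with simulated <code>V$SQLAREA</code> rows. The column names mirror the real view (<code>DISK_READS</code>, <code>BUFFER_GETS</code>, <code>EXECUTIONS</code>), but the statements and numbers here are invented for the sketch; in a live system you would <code>ORDER BY BUFFER_GETS</code> against the view itself.</p>

```python
# Simulated V$SQLAREA rows: (sql_text, disk_reads, buffer_gets, executions).
rows = [
    ("SELECT * FROM orders WHERE ...",  90_000, 2_500_000,    50),
    ("SELECT total FROM summary ...",      200,    40_000, 9_000),
    ("UPDATE stock SET qty = ...",      30_000,   900_000,   700),
]

# Overall resource consumers: rank by buffer gets (a rough CPU proxy).
by_total = sorted(rows, key=lambda r: r[2], reverse=True)

# Per-execution cost: the same rows ranked by buffer gets per execution,
# which surfaces statements that are individually inefficient even if
# they run rarely.
by_unit = sorted(rows, key=lambda r: r[2] / r[3], reverse=True)
```

<p>In this made-up data the orders query tops both lists; when the two rankings disagree, the overall list tells you where the system spends its time and the per-execution list tells you which statements have the most room to improve.</p>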
<h4 class="H3"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 2: Tune These Statements To Use Fewer Resources</span></h4>
<p class="BP">Remember that application design is fundamental to performance. No amount of SQL statement tuning can make up for inefficient application design. If you encounter SQL statement tuning problems, then perhaps you need to change the application design.</p>
<p class="BP">You can use two strategies to reduce the resources consumed by a particular statement:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="9182"></a>Get the statement to use fewer resources.</li>
<li class="LB1" type="DISC"><a name="9183"></a>Use the statement less frequently.</li>
</ul>
<p class="BP">Statements may use more resources because they do the most work, or because they perform their work inefficiently&#8211;or they may do both. However, the lower the resource used per unit of work (per row processed), the more likely it is that you can significantly reduce resources used only by changing the application itself. That is, rather than changing the SQL, it may be more effective to have the application process fewer rows, or process the same rows less frequently.</p>
<p class="BP">These two approaches are not mutually exclusive. The former is clearly less expensive, because you should be able to accomplish it either without program change (by changing index structures) or by changing only the SQL statement itself rather than the surrounding logic.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 6: Tune the Access Paths</span></h3>
<p class="BP">Ensure that there is efficient data access. Consider the use of clusters, hash clusters, B*-tree indexes, bitmap indexes, and optimizer hints. Also consider analyzing tables and using histograms to analyze columns in order to help the optimizer determine the best query plan.</p>
<p class="BP">Ensuring efficient access may mean adding indexes or adding indexes for a particular application and then dropping them again. It may also mean re-analyzing your design after you have built the database. You may want to further normalize your data or create alternative indexes. Upon testing the application, you may find that you are still not obtaining the required response time. If this happens, then look for more ways to improve the design.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 7: Tune Memory Allocation</span></h3>
<p class="BP">Appropriate allocation of memory resources to Oracle memory structures can have a positive effect on performance.</p>
<p class="BP">Oracle8<em class="Italic">i</em> shared memory is allocated dynamically to the following structures, which are all part of the shared pool. Although you explicitly set the total amount of memory available in the shared pool, the system dynamically sets the size of each of the following structures contained within it:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="475"></a>The data dictionary cache</li>
<li class="LB1" type="DISC"><a name="477"></a>The library cache</li>
<li class="LB1" type="DISC"><a name="2779"></a>Context areas (if running a multi-threaded server)</li>
</ul>
<p class="BP">You can explicitly set memory allocation for the following structures:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="481"></a>Buffer cache</li>
<li class="LB1" type="DISC"><a name="3327"></a>Log buffer</li>
<li class="LB1" type="DISC"><a name="3329"></a>Sequence caches</li>
</ul>
<p class="BP">Proper allocation of memory resources improves cache performance, reduces parsing of SQL statements, and reduces paging and swapping.</p>
<p class="BP">Process local areas include:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="2786"></a>Context areas (for systems not running a multi-threaded server)</li>
<li class="LB1" type="DISC"><a name="2791"></a>Sort areas</li>
<li class="LB1" type="DISC"><a name="2792"></a>Hash areas</li>
</ul>
<p class="BP">Be careful not to allocate to the system global area (SGA) such a large percentage of the machine&#8217;s physical memory that it causes paging or swapping.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 8: Tune I/O and Physical Structure</span></h3>
<p class="BP">Disk I/O tends to reduce the performance of many software applications. The Oracle server, however, is designed so that its performance is not unduly limited by I/O. Tuning I/O and physical structure involves these procedures:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="3335"></a>Distributing data so that I/O is distributed to avoid disk contention.</li>
<li class="LB1" type="DISC"><a name="497"></a>Storing data in data blocks for best access: setting an adequate number of free lists and using proper values for <code>PCTFREE</code> and <code>PCTUSED</code>.</li>
<li class="LB1" type="DISC"><a name="499"></a>Creating extents large enough for your data, to avoid dynamic extension of tables, which adversely affects the performance of high-volume OLTP applications.</li>
<li class="LB1" type="DISC"><a name="2808"></a>Evaluating the use of raw devices.</li>
</ul>
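<p class="BP">For example, block-level storage settings are specified when you create a table. The following statement is a sketch only; the table name and values are hypothetical and must be chosen for your own data:</p>
<pre>CREATE TABLE orders ( ... )
  PCTFREE 10   -- reserve 10% of each block for updates to existing rows
  PCTUSED 40   -- return a block to the free list when usage falls below 40%
  STORAGE (INITIAL 100M NEXT 100M FREELISTS 4);</pre>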
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 9: Tune Resource Contention</span></h3>
<p class="BP">Concurrent processing by multiple Oracle users may create contention for Oracle resources. Contention may cause processes to wait until resources are available. Take care to reduce the following types of contention:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="508"></a>Block contention</li>
<li class="LB1" type="DISC"><a name="2004"></a>Shared pool contention</li>
<li class="LB1" type="DISC"><a name="2809"></a>Lock contention</li>
<li class="LB1" type="DISC"><a name="2810"></a>Pinging (in a parallel server environment)</li>
<li class="LB1" type="DISC"><a name="2811"></a>Latch contention</li>
</ul>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Step 10: Tune the Underlying Platform(s)</span></h3>
<p class="BP">See your platform-specific Oracle documentation for ways to tune the underlying system. For example, on UNIX-based systems you might want to tune the following:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="518"></a>Size of the UNIX buffer cache</li>
<li class="LB1" type="DISC"><a name="520"></a>Logical volume managers</li>
<li class="LB1" type="DISC"><a name="522"></a>Memory and size for each process</li>
</ul>
<h2 class="H1"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Applying the Tuning Method</span></h2>
<p class="BP">This section explains how to apply the tuning method:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="531"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#545">Set Clear Goals for Tuning</a></li>
<li class="LB1" type="DISC"><a name="2613"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#555">Create Minimum Repeatable Tests</a></li>
<li class="LB1" type="DISC"><a name="2614"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#561">Test Hypotheses</a></li>
<li class="LB1" type="DISC"><a name="2615"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#567">Keep Records and Automate Testing</a></li>
<li class="LB1" type="DISC"><a name="2616"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#577">Avoid Common Errors</a></li>
<li class="LB1" type="DISC"><a name="2617"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#601">Stop Tuning When Objectives Are Met</a></li>
<li class="LB1" type="DISC"><a name="2618"></a><a href="https://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/ch2_meth.htm#605">Demonstrate Meeting the Objectives</a></li>
</ul>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Set Clear Goals for Tuning</span></h3>
<p class="BP">Never begin tuning without having first established clear objectives: you cannot succeed without a definition of &#8220;success.&#8221;</p>
<p class="BP">&#8220;Just make it go as fast as you can&#8221; may sound like an objective, but it is very difficult to determine whether this has been achieved. It is even more difficult to tell whether your results have met the underlying business requirements. A more useful objective is: &#8220;We need to have as many as 20 operators, each entering 20 orders per hour, and the packing lists must be produced within 30 minutes of the end of the shift.&#8221;</p>
<p class="BP">Keep your goals in mind as you consider each tuning measure. Consider its performance benefits in light of your goals.</p>
<p class="BP">Also remember that your goals may conflict. For example, to achieve best performance for a specific SQL statement, you may need to sacrifice the performance of other SQL statements running concurrently on your database.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Create Minimum Repeatable Tests</span></h3>
<p class="BP">Create a series of minimum repeatable tests. For example, if you identify a single SQL statement that is causing performance problems, then run both the original and the revised version of that statement in SQL*Plus (with the SQL Trace Facility or Oracle Trace enabled), so that you can see statistically the difference in performance. In many cases, a tuning effort can succeed simply by identifying one SQL statement that was causing the performance problem.</p>
<p class="BP">For example, assume that you need to reduce a 4-hour run to 2 hours. Perform your trial runs in a test environment similar to the production environment, but impose additional restrictive conditions to shorten the test, such as processing one department instead of all 500 departments. The ideal test case should run for more than 1 minute but probably not longer than 5, so that you can intuitively detect improvements. You should also measure each test run using timing features.</p>
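<p class="BP">In SQL*Plus, two built-in features cover the measurements described above; only the statement being tested is your own:</p>
<pre>SQL&gt; SET TIMING ON                          -- report elapsed time for each statement
SQL&gt; ALTER SESSION SET SQL_TRACE = TRUE;   -- write a trace file for TKPROF analysis

SQL&gt; -- run the original and the revised statement here and compare

SQL&gt; ALTER SESSION SET SQL_TRACE = FALSE;</pre>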
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Test Hypotheses</span></h3>
<p class="BP">With a minimum repeatable test established, and with a script both to conduct the test and to summarize and report the results, you can test various hypotheses to see the effect.</p>
<p class="BP">Remember that with Oracle&#8217;s caching algorithms, the first time data is cached there is more overhead than when the same data is later accessed from memory. Thus, if you perform two tests, one after the other, then the second test should run faster than the first. This is because data that the test run would otherwise have had to read from disk may instead be more quickly retrieved from the cache.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Keep Records and Automate Testing</span></h3>
<p class="BP">Keep records of the effect of each change by incorporating record keeping into the test script. You also should automate testing. Automation provides a number of advantages:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="571"></a>It permits cost effectiveness in terms of the tuner&#8217;s ability to conduct tests quickly.</li>
<li class="LB1" type="DISC"><a name="573"></a>It helps ensure that tests are conducted in the same systematic way, using the same instrumentation for each hypothesis you are testing.</li>
</ul>
<p class="BP">You should also carefully check test results derived from observations of system performance against the objective data before accepting them.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Avoid Common Errors</span></h3>
<p class="BP">A common error made by inexperienced tuners is to adhere to preconceived notions about what may be causing the problem. The next most common error is to attempt various solutions at random.</p>
<p class="BP">Scrutinize your resolution process by developing a written description of your theory of what you think the problem is. This often helps you detect mistakes, simply from articulating your ideas. For best results, consult a team of people to help resolve performance problems. While a performance tuner can tune SQL statements without knowing the application in detail, the team should include someone who understands the application and who can validate the solutions the SQL tuner may devise.</p>
<h4 class="H3"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Avoid Poorly Thought Out Solutions</span></h4>
<p class="BP">Beware of changing something in the system based on a guess. Once you have a hypothesis that you have not completely thought through, you may be tempted to implement it globally. Doing this in haste can seriously degrade system performance, to the point where you may have to rebuild part of your environment from backups.</p>
<h4 class="H3"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Avoid Preconceptions</span></h4>
<p class="BP">Try to avoid preconceptions when you address a tuning problem. Ask users to describe performance problems. However, do not expect users to know why the problem exists.</p>
<p class="BP">One user, for example, had serious system memory problems over a long period of time. During the morning, the system ran well, but performance rapidly degraded in the afternoon. A consultant tuning the system was told that a PL/SQL memory leak was the cause. As it turned out, this was not at all the problem.</p>
<p class="BP">Instead, the user had set <code>SORT_AREA_SIZE</code> to 10MB on a machine with 64 MB of memory serving 20 users. When users logged on to the system, the first time they executed a sort, their sessions were assigned to a sort area. Each session held the sort area for the duration of the session. So, the system was burdened with 200MB of virtual memory, hopelessly swapping and paging.</p>
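<p class="BP">The arithmetic makes the problem obvious:</p>
<pre>20 sessions x 10 MB SORT_AREA_SIZE = 200 MB of potential sort space
200 MB demanded on a machine with 64 MB of physical memory
=&gt; roughly threefold overcommitment, hence the constant paging and swapping</pre>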
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Stop Tuning When Objectives Are Met</span></h3>
<p class="BP">One of the great advantages of having targets for tuning is that it becomes possible to define success. Past a certain point, it is no longer cost effective to continue tuning a system.</p>
<h3 class="H2"><span style="color: #330099; font-family: Arial, Helvetica, sans-serif;">Demonstrate Meeting the Objectives</span></h3>
<p class="BP">As the tuner, you may be confident that performance targets have been met. Nonetheless, you must demonstrate this to two communities:</p>
<ul class="LB1">
<li class="LB1" type="DISC"><a name="609"></a>The users affected by the problem.</li>
<li class="LB1" type="DISC"><a name="611"></a>Those responsible for the application&#8217;s success.</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/designing-and-tuning-for-performance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>STEPS ON HOW TO SETUP AN ORACLE GOLDENGATE HUB</title>
		<link>https://posts.presplay.cloud/steps-on-how-to-setup-an-oracle-goldengate-hub/</link>
					<comments>https://posts.presplay.cloud/steps-on-how-to-setup-an-oracle-goldengate-hub/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Tue, 25 Aug 2020 15:36:55 +0000</pubDate>
				<category><![CDATA[Cloud Database]]></category>
		<category><![CDATA[opinion]]></category>
		<category><![CDATA[Oracle]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=13719</guid>

					<description><![CDATA[In this post, we will provide the steps to setup a GoldenGate hub.  In our hypothetical environment, we have a medium sized company, with two offices, one in San Jose and one in Connecticut.  The HR departments are logically separated between the west coast and South coasts, so the west coast schema is HRWEST and&#8230;]]></description>
					<content:encoded><![CDATA[<p>In this post, we will provide the steps to set up a GoldenGate hub.  In our hypothetical environment, we have a medium-sized company with two offices, one in San Jose and one in Connecticut.  The HR departments are logically separated between the west coast and the south coast, so the west coast schema is HRWEST and the south coast schema is HRSOUTH.</p>
<p>All data in the HRWEST schema is for west coast employees, and all data in the HRSOUTH schema is for south coast employees.  The HRWEST schema is updated on the hrwestrh7 VM, and the data is replicated using GoldenGate (one way) to hrsouthrh7.  Similarly, the data from the HRSOUTH schema is replicated (one way) to the hrwestrh7 VM.</p>
<p>In this first article, we will cover the implementation of a GoldenGate hub in the environment described above.  In the second part, we will discuss how to enable the schemas for bi-directional replication.</p>
<p>Among the uses of a GoldenGate hub is the movement of Oracle data to Amazon Web Services (AWS).  There are several steps unique to that process that we are not covering here, but will cover in a future post.</p>
<p>The first step is to install the most recent Oracle client on the GoldenGate hub server, choosing the administrator installation.  The name of that server in this environment is presplaycld.  In a hub configuration, GoldenGate is installed only on the hub server; it is not installed on the database servers.  The Oracle client is likewise installed on the hub server, and we will be using the thick client.</p>
<p><img decoding="async" src="http://houseofbrick.com/wp-content/uploads/2017/12/Oracle_GGHub.png" /></p>
<p>We will not cover the remainder of the client installation steps here; simply select the defaults after choosing the administrator installation.</p>
<p>Next, install the GoldenGate software.  Follow the instructions for the most recent version of GoldenGate, and install it in the directory listed below (for ease of installation and to follow along with this blog):</p>
<p>/u01/app/oracle/product/12.1.0/gghome</p>
<p>Also for ease of use, define the GoldenGate home directory as GGHOME in a Linux environment variable, set either at system start or at Oracle user login.  Put the line ‘export GGHOME=/u01/app/oracle/product/12.1.0/gghome’ in the .bash_profile to set it at user login, or in the file /etc/profile to set it system wide.</p>
<p>Next, put the client home directory in /etc/oratab:</p>
<pre>client:/u01/app/oracle/product/12.1.0/client:N</pre>
<p>&nbsp;</p>
<p>Next, our tnsnames entries:</p>
<pre>HRWESTDB =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = hrwestrh7)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = WEST)
 )
 )

HRSOUTHDB =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = hrsouthrh7)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = SOUTH)
 )
 )</pre>
<p>&nbsp;</p>
<p>The above entries go in this file on the hub server:</p>
<p>/u01/app/oracle/product/12.1.0/client/network/admin/tnsnames.ora</p>
<p>Define the TNS_ADMIN variable to point to the directory containing the tnsnames.ora file, at either the system or the user login level as described above.</p>
<pre>export TNS_ADMIN=/u01/app/oracle/product/12.1.0/client/network/admin</pre>
<p>&nbsp;</p>
<p>Now let&#8217;s begin.</p>
<p>1) Set the environment.</p>
<pre>. oraenv

ORACLE_SID= [oracle] client

The Oracle base remains unchanged at /u01/app/oracle</pre>
<p>&nbsp;</p>
<p>Note that setting the environment is required because GoldenGate needs various binaries and libraries from the client installation in order to access remote Oracle databases.</p>
<p>2) Go to the GoldenGate installation directory and start the setup process.</p>
<pre>[oracle@presplaycld ]$ cd $GGHOME

[oracle@presplaycld gghome]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
 Version 12.3.0.1.0 OGGCORE_12.3.0.1.0_PLATFORMS_170721.0154_FBO
 Linux, x64, 64bit (optimized), Oracle 12c on Jul 21 2017 23:31:13
 Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.

GGSCI (presplaycld) &gt; create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/12.3/gghome

Parameter file                 /u01/app/oracle/product/12.3/gghome/dirprm: created.
 Report file                    /u01/app/oracle/product/12.3/gghome/dirrpt: created.
 Checkpoint file                /u01/app/oracle/product/12.3/gghome/dirchk: created.
 Process status files           /u01/app/oracle/product/12.3/gghome/dirpcs: created.
 SQL script files               /u01/app/oracle/product/12.3/gghome/dirsql: created.
 Database definitions files     /u01/app/oracle/product/12.3/gghome/dirdef: created.
 Extract data files             /u01/app/oracle/product/12.3/gghome/dirdat: created.
 Temporary files                /u01/app/oracle/product/12.3/gghome/dirtmp: created.
 Credential store files         /u01/app/oracle/product/12.3/gghome/dircrd: created.
 Masterkey wallet files         /u01/app/oracle/product/12.3/gghome/dirwlt: created.
 Dump files                     /u01/app/oracle/product/12.3/gghome/dirdmp: created.

GGSCI (presplaycld) &gt; edit param mgr

PORT 7865

GGSCI (presplaycld) &gt; start mgr
 Manager started.

Next, we set up the credentialstore.

GGSCI (presplaycld) &gt; add credentialstore

Credential store created in /u01/app/oracle/product/12.3/gghome/dircrd/.

GGSCI (presplaycld) &gt; alter credentialstore add user ggadmin@hrwestdb password ggadmin alias ggadmine

Credential store in /u01/app/oracle/product/12.3/gghome/dircrd/ altered.

GGSCI (presplaycld) &gt; alter credentialstore add user ggadmin@hrsouthdb password ggadmin alias ggadminw

Credential store in /u01/app/oracle/product/12.3/gghome/dircrd/ altered.</pre>
<p>&nbsp;</p>
<p>3) Configure the databases for GoldenGate.</p>
<p>On each database do the following.</p>
<p>Create the ggadmin user, grant privileges, and configure the database as shown below (in SQLPLUS):</p>
<pre>alter database add supplemental log data;
 alter system set enable_goldengate_replication=true scope=both;
 create tablespace ggs_data datafile '/u01/app/oracle/oradata/ggdemo/ggs_data01.dbf'
 size 1024m autoextend on;
 create user ggadmin identified by ggadmin default tablespace ggs_data
 temporary tablespace temp;
 grant connect,resource,create session, alter session to
 ggadmin;
 grant select any dictionary, select any table,create table to
 ggadmin;
 grant alter any table to ggadmin;
 grant execute on utl_file to ggadmin;
 grant flashback any table to ggadmin;
 grant execute on dbms_flashback to ggadmin;
 @marker_setup.sql
 @ddl_setup.sql
 @role_setup.sql
 @ddl_enable.sql
 @sequence.sql
 EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
 grant ggs_ggsuser_role to ggadmin;
 @ddl_enable
 shutdown immediate;
 startup mount;
 alter database archivelog;
 alter database flashback on;
 alter database open;
 alter database add supplemental log data;
 alter database force logging;
 grant EXECUTE on dbms_logmnr_d to GGADMIN;
 grant SELECT on sys.logmnr_buildlog to GGADMIN;
 GRANT EXECUTE ON UTL_FILE TO  GGADMIN;
 grant EXEMPT ACCESS POLICY to GGADMIN;</pre>
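<p>After running the configuration above, it is worth verifying the database settings before moving on. The following read-only query uses standard v$database columns; each should report YES (or ARCHIVELOG for log_mode) if the steps above succeeded:</p>
<pre>SQL&gt; SELECT supplemental_log_data_min, force_logging, flashback_on, log_mode
     FROM v$database;</pre>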
<p>4) Go back to ggsci to set up the replication.</p>
<pre>ggsci &gt; dblogin useridalias ggadmine

ggsci as ggadmin@hrwestdb &gt; add schematrandata hrwest ALLCOLS

ggsci as ggadmin@hrwestdb &gt; add schematrandata hrsouth ALLCOLS

ggsci as ggadmin@hrwestdb &gt; edit params extehre

extract extehre
 exttrail ./dirdat/ee
 tranlogoptions IntegratedParams (max_sga_size 256)
 discardfile ./dirrpt/silext01.dsc, append megabytes 50
 logallsupcols
 updaterecordformat compact
 reportcount every 2 hours, rate
 useridalias ggadmine
 table HRWEST.*;

ggsci as ggadmin@hrwestdb &gt; register extract extehre database

2017-11-29 17:11:40  INFO    OGG-02003  Extract extehre successfully registered with database at SCN 1980353.<strong>&lt;= Record this for future use.</strong></pre>
<p>&nbsp;</p>
<pre>GGSCI (presplaycld as ggadmin@hrwestdb) 8&gt; add extract extehre, integrated tranlog, begin now
 EXTRACT (Integrated) added.

GGSCI (presplaycld as ggadmin@hrwestdb) 26&gt; ADD EXTTRAIL ./dirdat/ee, EXTRACT extehre
 EXTTRAIL added.

GGSCI (presplaycld as ggadmin@hrwestdb) 27&gt; start extract extehre

Sending START request to MANAGER ...
 EXTRACT extehre starting</pre>
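<p>At any point you can check on the extract from GGSCI using the standard status commands:</p>
<pre>GGSCI (presplaycld) &gt; info all <strong>&lt;= manager plus every extract/replicat group and its status</strong>

GGSCI (presplaycld) &gt; info extract extehre, detail <strong>&lt;= checkpoints, trail files, and lag</strong></pre>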
<p>&nbsp;</p>
<p>At this point, the data from the HRWEST schema is being replicated from the HRWEST database, but is not as yet being applied to the HRSOUTH database.  The next step is to import the initial data into the HRSOUTH database.  We need to import all data prior to the start of replication.  Datapump is one method for building this initial load as described below:</p>
<p>1 – In the HRSOUTH database, create a database link that points to HRWEST.</p>
<pre>SQL&gt; Create database link hrwest connect to system identified by system using ‘hrwest’;</pre>
<p>&nbsp;</p>
<p>2 – Create a SQL*Plus directory object for datapump to use.  In this case, I used HOMEDIR:</p>
<pre>SQL&gt; Create directory homedir as '/home/oracle';</pre>
<p>3 – Use the datapump network link option to load the data:</p>
<pre>impdp directory=homedir schemas=hrwest table_exists_action=replace network_link=hrwest flashback_scn=1980352<strong> (one less than the SCN recorded from above)</strong></pre>
<p>&nbsp;</p>
<p>4 – The data will be imported into the HRSOUTHDB.</p>
<p>Now we are ready to configure the replicat.  This applies the changes being captured using the extract.</p>
<p>Connect to the hrsouth database as ggadmin from ggsci:</p>
<pre>ggsci&gt; dblogin useridalias ggadminw

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; add schematrandata hrwest ALLCOLS

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; add schematrandata hrsouth ALLCOLS

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; add replicat wesrepe, integrated, exttrail ./dirdat/ee  <strong>&lt;=</strong><strong> remember, this is the same as the exttrail declared for the extract</strong>.
 REPLICAT (Integrated) added.

edit params wesrepe

replicat wesrepe
 ASSUMETARGETDEFS
 DISCARDFILE ./dirrpt/weserep01.dsc
 DDL INCLUDE ALL
 USERIDALIAS ggadminw
 REPORTCOUNT EVERY 1 HOURS, RATE
 MAP HRWEST.*, TARGET HRWEST.*;

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; start replicat wesrepe</pre>
<p>&nbsp;</p>
<p>At this point, replication should be working from HRWESTDB to HRSOUTHDB.  Be sure to run updates in HRWESTDB to verify that everything is working correctly.</p>
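<p>A simple smoke test (with hypothetical table and column names) looks like this: make a committed change on the source, confirm it on the target, and check the replicat statistics:</p>
<pre>-- on HRWESTDB (source): any committed DML in the HRWEST schema
SQL&gt; UPDATE hrwest.employees SET phone = '555-0100' WHERE employee_id = 101;
SQL&gt; COMMIT;

-- on HRSOUTHDB (target): the change should appear within a few seconds
SQL&gt; SELECT phone FROM hrwest.employees WHERE employee_id = 101;

-- on the hub, from GGSCI:
GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; stats replicat wesrepe</pre>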
<p>Now, follow the same process to replicate the HRSOUTH schema from HRSOUTHDB to HRWESTDB.</p>
<p>We are already connected to HRSOUTHDB, so we can define the extract here:</p>
<pre>GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; register extract extwhrw database

2017-11-29 17:11:40  INFO    OGG-02003  Extract extwhrw successfully registered with database at SCN 1980353.<strong>&lt;=</strong><strong> Record this for future use.</strong>

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; add extract extwhrw, integrated tranlog, begin now
 EXTRACT (Integrated) added.

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; ADD EXTTRAIL ./dirdat/we, EXTRACT extwhrw
 EXTTRAIL added.

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; edit params extwhrw

extract extwhrw
 exttrail ./dirdat/we
 tranlogoptions IntegratedParams (max_sga_size 256)
 discardfile ./dirrpt/orcext01.dsc, append megabytes 50
 logallsupcols
 updaterecordformat compact
 reportcount every 2 hours, rate
 useridalias ggadminw
 table HRSOUTH.*;

GGSCI (presplaycld as ggadmin@hrsouthdb) &gt; start extract extwhrw</pre>
<p>&nbsp;</p>
<p>Now set up the replicat:</p>
<ol>
<li>In the HRWEST database, create a database link that points to HRSOUTH.</li>
</ol>
<pre>SQL&gt; Create database link HRSOUTH connect to system identified by system using ‘hrsouth’;</pre>
<p>&nbsp;</p>
<ol start="2">
<li>Create a SQLPLUS directory for datapump to use.  In this case, I used HOMEDIR: SQL&gt; Create directory homedir as ‘/home/oracle’;</li>
<li>Use the datapump network link option to load the data:</li>
</ol>
<pre>impdp directory=homedir schemas=hrsouth table_exists_action=replace network_link=hrsouth flashback_scn=1980352 (one less than the SCN recorded from above)</pre>
<p>&nbsp;</p>
<ol start="4">
<li>The data will be imported into the HRWESTDB.</li>
</ol>
<p>Now we are ready to configure the replicat:</p>
<pre>GGSCI (presplaycld as ggadmin@hrwestdb) &gt;dblogin useridalias ggadmine

GGSCI (presplaycld as ggadmin@hrwestdb) &gt; add schematrandata hrsouth ALLCOLS

GGSCI (presplaycld as ggadmin@hrwestdb) &gt; add replicat easrepw, integrated, exttrail ./dirdat/we  &lt;= remember this is the same as the exttrail declared for the extract.
 REPLICAT (Integrated) added.

edit params easrepw

replicat easrepw
 ASSUMETARGETDEFS
 DISCARDFILE ./dirrpt/easrepw01.dsc
 DDL INCLUDE ALL
 USERIDALIAS ggadmine
 REPORTCOUNT EVERY 1 HOURS, RATE
 MAP HRSOUTH.*, TARGET HRSOUTH.*;

GGSCI (presplaycld as ggadmin@hrwestdb) &gt; start replicat easrepw</pre>
<p>&nbsp;</p>
<p>At this point, we have replication running from HRSOUTHDB, schema HRSOUTH, to HRWESTDB. Schema HRWEST is replicating from HRWESTDB to HRSOUTHDB.  We are using a GoldenGate hub server, on a third system, to manage the replication process.</p>
<p>Please note that you will need to run updates in each schema in order to verify that everything is working properly.</p>
<p>In part two, we will make the necessary changes to allow for multi-master replication, so that updates can be made in both databases.</p>
<p>&nbsp;</p>
<p><em>Please note: this blog contains code examples provided for your reference. All sample code is provided for illustrative purposes only. Use of information appearing in this blog is solely at your own risk. Please read our full disclaimer for details.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/steps-on-how-to-setup-an-oracle-goldengate-hub/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Working from home demands on Service Desk and IT cybersecurity</title>
		<link>https://posts.presplay.cloud/working-from-home-demands-on-service-desk-and-it-cybersecurity/</link>
					<comments>https://posts.presplay.cloud/working-from-home-demands-on-service-desk-and-it-cybersecurity/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Thu, 16 Jul 2020 16:03:51 +0000</pubDate>
				<category><![CDATA[Cyber security]]></category>
		<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=12637</guid>

					<description><![CDATA[By Akinola Idris 2 minutes ago Cybersecurity protection for work at home employees Overnight, all over the world, users are staying at home – and working from home. This requires new devices and new software licenses – but what about support? In most organizations work-from-home has many positive consequences. The employees are working hard, and they continue evenings and&#8230;]]></description>
					<content:encoded><![CDATA[<p class="byline">By Akinola Idris</p>
<p class="strapline">Cybersecurity protection for work at home employees</p>
<p><img decoding="async" src="https://beyondstandards.ieee.org/wp-content/uploads/2016/05/featured-cybersecurity.jpg" alt="Global, Open Standards for Cyber-security - IEEE SA Beyond Standards" /></p>
<p>Overnight, all over the world, users are staying at home – and working from home. This requires new devices and new software licenses – but what about support?</p>
<p>In most organizations work-from-home has had many positive consequences. The employees are working hard, and they continue into evenings and weekends to get the job done. Even after the coronavirus, we now expect that work-from-home will continue at a much higher level than before the crisis. The hardware and software are in place, but the users still need support from the central IT service desk.</p>
<p>Expect to see higher load, and tasks you can’t support today. Expect to see more load at unusual times like nights and weekends. If the support isn’t available, then the employees can’t do their job, and our companies will lose productivity.</p>
<p>It has also been reported that cybercriminals are exploiting the situation. It is more difficult for the central support team to verify whether someone calling in is a real employee or a criminal impersonating one.</p>
<p><img decoding="async" src="https://images.techhive.com/images/article/2016/11/data-protection-100693437-large.jpg" alt="Cybersecurity outlook: data protection takes center stage | CSO Online" /></p>
<p>New tools are needed to get the support done without increasing the costs and risks for the organizations.</p>
<h2 id="more-calls">More calls</h2>
<p>Many users will have new experiences and surprises. How do I print, how do I scan, where is this service, etc.? With more calls comes increased stress for the service desk; queue time for users will increase and general productivity will deteriorate for the users and the IT-team.</p>
<p>We hear of call-volume increases on the order of 15%. Look for tools that help the service desk by providing fast solutions through self-service, so that the total number of calls goes down. On average in the industry, 20% of all calls are password related. With an efficient, user-friendly password self-service tool, you can cut the number of calls back to the normal level.</p>
<p>The key to self-service success is that at least 80&#8211;90% of all users both can and will use it.</p>
<h2 id="password-reset">Password reset</h2>
<p>One trivial incident will not change – users forget passwords and call for password recovery and resets. A simple question: “Can your service desk reset a password at a remote company PC?” Even when you use VPN for security, the service desk can’t access a “dead PC”.</p>
<p>Today, for most companies, the only solution is for someone to transport the PC back to company premises so that the locally cached password can be synchronized with the AD password, after which the user can get the PC back. This is very expensive.</p>
<p>What users want is a solution where they can verify their identity to a self-service system in a simple way, after which the PC works again. This helps the user carry on even when it happens on a weekend or at night. There are out-of-the-box solutions that solve this important technical issue.</p>
<h2 id="identity-verification">Identity verification</h2>
<p>When a criminal calls someone over the phone and impersonates another person to achieve something, it is called vishing (voice-based phishing). When working from home, this risk increases manyfold.</p>
<p>New security issues: In “the old days,” people called the service desk from local company phones, meaning that we could verify who they were, but working from home doesn’t give us much with which to verify the user’s identity. If they ask for a password reset, how can we verify that the person is the user they claim to be?</p>
<p>We need a process, but even that is not enough. Social engineers – hackers – know how to use emotions to get service desk staff to help them. The emotions might be fear, greed or simply the desire to be an empathetic person; that is, after all, what help desks are for. To win this battle against the criminals, we must introduce an IT workflow that controls the entire verification process and takes the emotion out of it.</p>
<p>Controlling the ID verification should include a verbal identity “test” by the service desk. Using knowledge about which computers the real employees use, the locations they work from, and other confidential information, the test questions will be impossible for a hacker to answer. For important employees, the test can even include approval from managers in the process. It takes more time to include new people in the process, but the cost of failure is much higher.</p>
<p>The three changes above will mean dissatisfied management and concerns from IT security leadership. It’s time to reduce the burden on the service desk and let users give themselves better service than the service desk can. And don’t forget IT security – the hackers aren’t taking a break; they’re increasing their efforts right now.</p>
<h2 id="a-solution">A solution</h2>
<p>With work-from-home, we see how companies struggle with costs and security issues. The right solution (cloud or on-premises) can immediately help companies reduce the load on the service desk, make employees more productive, and prevent criminals from using the service desk to gain access to important applications.</p>
<p>The basic objectives of the solution should be to reduce calls to the service desk, reset passwords on remote PCs through a company VPN, and provide rock-solid user identification.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/working-from-home-demands-on-service-desk-and-it-cybersecurity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fighting cybercrime in a connected world</title>
		<link>https://posts.presplay.cloud/fighting-cybercrime-in-a-connected-world/</link>
					<comments>https://posts.presplay.cloud/fighting-cybercrime-in-a-connected-world/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Sat, 12 Oct 2019 00:05:58 +0000</pubDate>
				<category><![CDATA[Cyber security]]></category>
		<category><![CDATA[opinion]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=4966</guid>

					<description><![CDATA[THE HAGUE, The Netherlands – In our increasingly interconnected world, the impacts of cybercrime can be far-reaching, fast moving and devastating to its victims. To address the challenges for police in preventing and investigating cybercrime globally, the 7th Europol-INTERPOL Cybercrime Conference brought together cyber experts from law enforcement, private industry, international organizations and academia for&#8230;]]></description>
										<content:encoded><![CDATA[<p style="font-weight: 400;">THE HAGUE, The Netherlands – In our increasingly interconnected world, the impacts of cybercrime can be far-reaching, fast moving and devastating to its victims.</p>
<p style="font-weight: 400;">To address the challenges for police in preventing and investigating cybercrime globally, the 7th Europol-INTERPOL Cybercrime Conference brought together cyber experts from law enforcement, private industry, international organizations and academia for in-depth discussions on the latest cyber threats, trends and strategies.</p>
<p style="font-weight: 400;">Under the theme of ‘Law enforcement in a connected future’, the three-day (9 – 11 October) conference focused on new developments in technology which could be exploited by criminals but also used to the benefit of police.</p>
<p><img decoding="async" src="https://www.interpol.int/var/interpol/storage/images/4/7/6/3/213674-1-eng-GB/Cyber-Conf-joint-opening-remarks.jpg" alt="Opening the 7th Europol-INTERPOL Cybercrime Conference." /></p>
<p style="font-weight: 400;">Key themes included the benefits and challenges of Artificial Intelligence for police; the potential impacts of 5G technology; cross-border access to electronic evidence; obstacles to international cooperation on cybercrime investigations; the importance of cyber capacity building; cryptocurrency trends and challenges; the use of open-source intelligence and privacy considerations.</p>
<p style="font-weight: 400;">With cybercriminals constantly evolving and transforming their tactics, INTERPOL’s Director of Cybercrime Craig Jones said the traditional model of policing is ‘being challenged like never before’.</p>
<p><img decoding="async" src="https://www.interpol.int/var/interpol/storage/images/7/7/6/3/213677-1-eng-GB/Craig-Jones-Dir-CD.jpg" alt="Craig-Jones-Dir-CD" /></p>
<p style="font-weight: 400;">“The cybercriminal world is agile and adapting, connecting and cooperating in ways we never imagined even just a few years ago,” said Mr Jones.</p>
<p style="font-weight: 400;">“Law enforcement must adapt to this ever-changing criminal environment in order to effectively protect our communities in the cyber domain,” he concluded.</p>
<p style="font-weight: 400;">During the opening ceremony, Mr Jones launched INTERPOL’s ‘#BECareful’ global public awareness campaign on business email compromise (BEC) fraud. The campaign, which will run for one month, will inform the public about this growing type of fraud and provide prevention tips for how to stay safe.</p>
<p style="font-weight: 400;">INTERPOL also presented the findings of its first cybercrime threat assessment during the conference. The report provides an analysis of the latest cybercrime trends identified in different regions using information provided by member countries, private partners and open source intelligence.</p>
<p style="font-weight: 400;">One trend identified is a shift from malware targeting computers to attacks targeting mobile devices, due to the fact that mobile devices are being used more and more frequently as payment platforms.</p>
<p style="font-weight: 400;">In response to a rise in cases of cryptojacking – where criminals remotely access victims’ systems using malware to hijack their computing power to create cryptocurrency – INTERPOL has disseminated more than 170 Cyber Activity Reports providing recommendations for prevention and mitigation.</p>
<p style="font-weight: 400;">Steven Wilson, Head of Europol’s European Cybercrime Centre (EC3) said: “Three days of conference with partners from law enforcement, industry and academia have shown what we can achieve when we work closely together to tackle the global issue of cybercrime.”</p>
<p><span class="quote__text"><strong><em>“We must make progress in prevention, legislation, enforcement and prosecution.”</em></strong></span> <span class="quote__author">Steven Wilson, Head of Europol’s European Cybercrime Centre (EC3)</span></p>
<p style="font-weight: 400;">&#8220;All of these elements are necessary in order to disrupt organized crime activity and reduce the online threat to businesses, governments and, above all, EU citizens. I look forward to  building on our trusted relationships to deliver an improved international response to this ever increasing challenge,” added Mr Wilson.</p>
<p style="font-weight: 400;">The conference, which gathered some 400 delegates from 70 countries, also provided an opportunity for Europol and INTERPOL to reconfirm their strong commitment to continuing their collaboration in the fight against cybercrime.</p>
<p style="font-weight: 400;">The Europol-INTERPOL Cybercrime Conference is a joint initiative launched in 2013. Held annually, it is hosted in alternate years by Europol and INTERPOL.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/fighting-cybercrime-in-a-connected-world/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Database Migration from non-CDB to PDB – Using Data Pump</title>
		<link>https://posts.presplay.cloud/database-migration-from-non-cdb-to-pdb-using-data-pump/</link>
					<comments>https://posts.presplay.cloud/database-migration-from-non-cdb-to-pdb-using-data-pump/#respond</comments>
		
		<dc:creator><![CDATA[Presplay]]></dc:creator>
		<pubDate>Mon, 29 Jul 2019 15:29:00 +0000</pubDate>
				<category><![CDATA[opinion]]></category>
		<category><![CDATA[Oracle]]></category>
		<guid isPermaLink="false">https://presplay.cloud/?p=22521</guid>

					<description><![CDATA[Posted on July 29, 2019 by Akinola Idris You may have realized that there are a few techniques missing describing how to do a Database Migration from non-CDB to PDB – Migration with Data Pump is one of them. I will explain the most simple approach of going to Single- or Multitenant. It isn’t the coolest – and&#8230;]]></description>
										<content:encoded><![CDATA[<p><span class="posted-on">Posted on July 29, 2019</span> <span class="byline">by <a href="https://posts.presplay.cloud/database-migrati…-using-data-pump/">Akinola Idris</a></span></p>
<p>You may have realized that a few techniques are still missing from the descriptions of how to do a <strong>Database Migration from non-CDB to PDB</strong> – migration with Data Pump is one of them. I will explain the simplest approach of going to Single- or Multitenant. It isn’t the coolest – and it isn’t very fast once your database has a significant size. But it is not complex. And it allows you to move even from very old versions directly into an Oracle 19c PDB – regardless of patch levels or of source and destination platform.</p>
<p><img decoding="async" class="aligncenter" src="https://dohdatabase.files.wordpress.com/2020/12/xtts-perl-ftex-demo-env-overview.png" alt="How to Migrate a Database Using Full Transportable Export Import and Incremental Backups – Databases Are Fun" /></p>
<h3>High Level Overview</h3>
<table>
<tbody>
<tr>
<td><em>Endianness change possible:</em></td>
<td>Yes</td>
</tr>
<tr class="alt">
<td><em>Source database versions:</em></td>
<td>Oracle 10.1.0.2 or newer</td>
</tr>
<tr>
<td><em>Characteristic:</em></td>
<td>Direct migration into PDB</td>
</tr>
<tr class="alt">
<td><em>Upgrade necessary:</em></td>
<td>No, happens implicitly</td>
</tr>
<tr>
<td><em>Downtime:<br />
</em></td>
<td>Migration – mainly depending on size and complexity</td>
</tr>
<tr class="alt">
<td><em>Minimal downtime option(s):</em></td>
<td>Oracle GoldenGate</td>
</tr>
<tr>
<td><em>Process overview:</em></td>
<td>Export from source, import into destination – either via dump file or via Database Link</td>
</tr>
<tr class="alt">
<td><em>Fallback after plugin:<br />
</em></td>
<td>Data Pump – optional: Oracle GoldenGate</td>
</tr>
</tbody>
</table>
<h3>Database Migration from non-CDB to PDB – Migration with Data Pump</h3>
<p>Well, I think I don’t need to explain Oracle Data Pump to anybody. At the end of this blog post you will find a long list of links pointing to the documentation and various workarounds. The big advantages of using Data Pump to migrate from a non-CDB into a PDB are:</p>
<ul>
<li>Works with every version since Oracle 10.1.0.2</li>
<li>Works regardless of patch level</li>
<li>Does not require any upgrade</li>
<li>Works across all platforms</li>
<li>Works regardless of encryption</li>
<li>Allows multiple transformations</li>
</ul>
<p>But the disadvantages of Data Pump are obvious as well, as the duration depends mostly on:</p>
<ul>
<li>Amount of data</li>
<li>Complexity of meta information</li>
<li>Special data types such as LONG and LOB</li>
</ul>
<p>I’d call Data Pump the most flexible approach but of course potentially also the slowest of all options.</p>
<h3>Process Overview</h3>
<p>Using Data Pump, you can export into a dump file and import from this dump file afterwards.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-5474 size-full" src="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?resize=740%2C241&amp;ssl=1" sizes="auto, (max-width: 740px) 100vw, 740px" srcset="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?w=840&amp;ssl=1 840w, https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?resize=300%2C98&amp;ssl=1 300w, https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?resize=768%2C250&amp;ssl=1 768w" alt="Database Migration from non-CDB to PDB – Migration with Data Pump" width="740" height="241" data-attachment-id="5474" data-permalink="https://mikedietrichde.com/2019/08/14/database-migration-from-non-cdb-to-pdb-migration-with-data-pump/datapump-with-dumpfile/" data-orig-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?fit=840%2C273&amp;ssl=1" data-orig-size="840,273" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="datapump-with-dumpfile" data-image-description="" data-image-caption="" data-medium-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?fit=300%2C98&amp;ssl=1" data-large-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-dumpfile.jpg?fit=740%2C241&amp;ssl=1" data-recalc-dims="1" /></p>
<p>Or you set up a database link from destination to source and run the import on the destination side of the database link, using the <code>NETWORK_LINK</code> parameter.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-5475 size-full" src="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?resize=740%2C197&amp;ssl=1" sizes="auto, (max-width: 740px) 100vw, 740px" srcset="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?w=806&amp;ssl=1 806w, https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?resize=300%2C80&amp;ssl=1 300w, https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?resize=768%2C205&amp;ssl=1 768w" alt="Database Migration from non-CDB to PDB – Migration with Data Pump" width="740" height="197" data-attachment-id="5475" data-permalink="https://mikedietrichde.com/2019/08/14/database-migration-from-non-cdb-to-pdb-migration-with-data-pump/datapump-with-networklink/" data-orig-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?fit=806%2C215&amp;ssl=1" data-orig-size="806,215" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="datapump-with-networklink" data-image-description="" data-image-caption="" data-medium-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?fit=300%2C80&amp;ssl=1" data-large-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/datapump-with-networklink.png?fit=740%2C197&amp;ssl=1" data-recalc-dims="1" /></p>
<p>The advantage of using a database link is that no dump file is written, and hence nothing needs to be copied over. But not all actions can run in parallel. Plus, not every data type is supported (<code>LONG</code>, for instance, until 12.2). And your limiting factor is always the source side, as Data Pump implicitly calls <code>expdp</code> on the source side. In my experience this can be faster, but especially when you work on the same storage or SAN and don’t have to move the dump file around, the first approach often works better.</p>
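<p>As a rough sketch, the dump-file variant could look like the following – the connect strings, directory object and schema name here are placeholders, not taken from this post:</p>
<pre>$ expdp system@source schemas=HR directory=DUMP_DIR dumpfile=hr_%U.dmp logfile=hr_exp.log

# copy the dump files to a directory visible to the destination database, then:

$ impdp system@pdb1 schemas=HR directory=DUMP_DIR dumpfile=hr_%U.dmp logfile=hr_imp.log</pre>
<p>The <code>NETWORK_LINK</code> variant skips the dump file entirely: with a database link named, say, <code>sourcedb</code> created in the destination PDB, a single call such as <code>impdp system@pdb1 network_link=sourcedb schemas=HR logfile=hr_net.log</code> pulls the data over directly.</p>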
<h3>Some Best Practices</h3>
<p>There are some recommendations for both approaches:</p>
<ul>
<li>Always use a par file</li>
<li>For a consistent export, use either <code>FLASHBACK_TIME=SYSTIMESTAMP</code> or <code>CONSISTENT=Y</code> (since 11.2)</li>
<li>Always <code>EXCLUDE=STATISTICS</code> – regather stats in destination is faster, or transport with a <code>STATS</code> table from <code>DBMS_STATS</code></li>
<li>Set <code>METRICS=Y</code> and since 12.1, <code>LOGTIME=ALL</code></li>
<li>Use <code>PARALLEL=&lt;2x number of cpu cores&gt;</code>
<ul>
<li>Since Oracle 12.2, meta data gets exported in parallel – but not with <code>NETWORK_LINK</code></li>
</ul>
</li>
<li>Preallocate <code>STREAMS_POOL_SIZE=128M</code> (or in the range of 64M-256M)</li>
<li>BasicFile LOBs (old 8i-style LOBs) are always slow
<ul>
<li>Use <code>TRANSFORM=LOB_STORAGE:SECUREFILE</code> on import to convert to SecureFile LOBs as part of the migration</li>
</ul>
</li>
</ul>
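<p>Collected into a par file, the recommendations above might look like this – the schema, directory object and file names are illustrative only:</p>
<pre># exp.par – example export parameter file
schemas=HR
directory=DUMP_DIR
dumpfile=hr_%U.dmp
logfile=hr_exp.log
flashback_time=systimestamp
exclude=statistics
metrics=y
logtime=all
parallel=8</pre>
<p>Invoked as <code>expdp system@source parfile=exp.par</code>. On the import side you would drop <code>flashback_time</code> and could add <code>transform=lob_storage:securefile</code> to get the SecureFile conversion.</p>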
<h3>Fallback</h3>
<p>If you used Data Pump as the migration approach to move from non-CDB to PDB, don’t expect a fast fallback scenario in case of failure. The important task is to use the <code>VERSION</code> parameter correctly when you export from the destination PDB: set it to the source’s release, so that the export is produced in a format, and with the contents, that the (old) source will understand when you reimport. Make sure there’s an empty database waiting in case fallback is important. And don’t clean up your old source home too early.</p>
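<p>For example, a fallback export from the 19c PDB destined for an 11.2.0.4 source could be run as follows – schema and file names are assumed, not from this post:</p>
<pre>$ expdp system@pdb1 schemas=HR version=11.2.0.4 directory=DUMP_DIR dumpfile=hr_fb.dmp logfile=hr_fb_exp.log</pre>
<p>The resulting dump file is then written in a format the 11.2.0.4 <code>impdp</code> can read.</p>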
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-5476 size-full" src="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?resize=740%2C227&amp;ssl=1" sizes="auto, (max-width: 740px) 100vw, 740px" srcset="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?w=822&amp;ssl=1 822w, https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?resize=300%2C92&amp;ssl=1 300w, https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?resize=768%2C235&amp;ssl=1 768w" alt="Database Migration from non-CDB to PDB – Migration with Data Pump" width="740" height="227" data-attachment-id="5476" data-permalink="https://mikedietrichde.com/2019/08/14/database-migration-from-non-cdb-to-pdb-migration-with-data-pump/fallback01_datapump/" data-orig-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?fit=822%2C252&amp;ssl=1" data-orig-size="822,252" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="fallback01_datapump" data-image-description="" data-image-caption="" data-medium-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?fit=300%2C92&amp;ssl=1" data-large-file="https://i2.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback01_datapump.jpg?fit=740%2C227&amp;ssl=1" data-recalc-dims="1" /></p>
<p>Be aware of one major pitfall: The time zone version. As typically your source database has a lower time zone version than the destination, you can migrate “forward” (same or higher version) but not “backwards” (lower version). Hence, in case of fallback you most likely need to apply a DST Time Zone patch to the older home in order to allow Data Pump to import. And make sure you follow the supported configurations setup from <a href="https://support.oracle.com/CSP/main/article?cmd=show&amp;type=NOT&amp;doctype=PROBLEM&amp;id=553337.1">MOS Note:553337.1</a> carefully.</p>
<p>Unfortunately, the fallback strategy over a <code>NETWORK_LINK</code> does not work. Even though the below scenario looks promising, you’ll receive an error when you call <code>impdp</code> from the lower version over the DB link. I’d assume the <code>VERSION</code> parameter does not get propagated in a way that convinces the <code>expdp</code> side to export in 11.2.0.4 format.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-5477 size-full" src="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?resize=740%2C210&amp;ssl=1" sizes="auto, (max-width: 740px) 100vw, 740px" srcset="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?w=758&amp;ssl=1 758w, https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?resize=300%2C85&amp;ssl=1 300w" alt="Database Migration from non-CDB to PDB – Migration with Data Pump" width="740" height="210" data-attachment-id="5477" data-permalink="https://mikedietrichde.com/2019/08/14/database-migration-from-non-cdb-to-pdb-migration-with-data-pump/fallback02_datapump/" data-orig-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?fit=758%2C215&amp;ssl=1" data-orig-size="758,215" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="fallback02_datapump" data-image-description="" data-image-caption="" data-medium-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?fit=300%2C85&amp;ssl=1" data-large-file="https://i1.wp.com/mikedietrichde.com/wp-content/uploads/2019/08/fallback02_datapump.jpg?fit=740%2C210&amp;ssl=1" data-recalc-dims="1" /></p>
<p>This will be the error you’ll receive:</p>
<pre>$ impdp system/oracle@ftex network_link=sourcedb version=11.2.0.4 tables=tab1 metrics=y exclude=statistics directory=mydir logfile=pdb2.log

Import: Release 11.2.0.4.0 - Production on Wed Aug 14 20:27:52 2019

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39169: Local version of 11.2.0.4.0 cannot work with remote version of 19.0.0.0.0.</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://posts.presplay.cloud/database-migration-from-non-cdb-to-pdb-using-data-pump/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
