Channel: SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications



Overview of MemScale


SAP ASE 16.0 SP02 introduced the MemScale option, a bundle of performance-driven technologies. See this video for an overview of the features introduced. The video is also posted on YouTube at https://www.youtube.com/watch?feature=em-subs_digest&v=wAIPRbbrOdc&app=desktop

 

For official feature documentation, see http://help.sap.com/saphelp_ase1602/helpdata/en/96/81ed472d79420ea35212a48d952925/content.htm?frameset=/en/f3/f98e555cf04bd6b4248a29db3d9973/frameset.htm&current_toc=/en/f3/f98e555cf04bd6b4248a29db3d9973/plain.htm&node_id=5

SAP ASE Webcast: Workload Analyzer


Join us for a webcast to learn about the SAP ASE workload analyzer option, new in SP02. This new functionality allows you to capture, analyze and replay a production workload in a non-disruptive manner. We’ll show you how to capture a workload on your production system, replay it on a testing environment, and quickly analyze the impact that changes in configuration parameters may have on application performance. Use real-life scenarios to determine the optimal configuration for your database, ensuring that your new configuration choices will have a positive impact on your production environment.

 

Wednesday, October 28th

1:00 PM EDT

 

Register now: https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=1054176&sessionid=1&key=4E986C0E2080077CF0957F4B88DC1A21&sourcepage=register

SAP HANA Cloud Platform (HCP) – SAP Adaptive Server Enterprise Persistence Instance (Part 1)


About SAP Adaptive Server Enterprise (ASE)

 

SAP ASE is a market-leading database management system for online transaction processing (OLTP), and a major part of the SAP Data Management Portfolio for end-to-end, real-time data management (as shown in Figure 1 below).

 

Figure 1 (image: datamanagement.jpg)

 

SAP ASE is a leader (Figure 2) in Gartner's Magic Quadrant for Operational Database Management Systems. SAP ASE has established a performance standard in the Sales and Distribution (SD) benchmark (http://news.sap.com/sap-sybase-adaptive-server-enterprise-scores-top-two-processor-and-four-processor-linux-performance-results/) with a #1 ranking on Linux. SAP ASE is also a patented technology with multiple grants.


Traditionally, SAP ASE has powered mission-critical environments across industries and has been a mainstay in the financial services industry. With version 15.7, SAP ASE expanded its capabilities to the cloud - business organizations now have the flexibility of deploying SAP ASE workloads across on-premise and cloud environments to better manage their Total Cost of Ownership (TCO).

 

 

Figure 2 (image: aseleader.JPG)

 

What is the Cloud?


Gartner describes the cloud (http://www.gartner.com/newsroom/id/1035013) as a style of computing that is service-based, scalable and elastic, shared, metered, Internet-technology enabled, and delivered to external customers.

 

There are multiple cloud reference architectures (for our discussion we reference the one from the National Institute of Standards and Technology - http://www.nist.gov/customcf/get_pdf.cfm?pub_id=909505). Their common theme is a set of layered service abstractions: IaaS (Infrastructure as a Service, which abstracts network, compute, and storage resources), PaaS (Platform as a Service, which abstracts application scalability and elasticity), and SaaS (Software as a Service, which abstracts business services), providing a framework for business on demand.


In this article we will focus on the SAP ASE Persistence Instance (DBaaS, Database as a Service) offering on the SAP HANA Cloud Platform PaaS (as shown in Figure 3 below).

 

 

 

Figure 3 (image: SAPHCPCloudFit.JPG)

 

 

SAP ASE Persistence Instance (DBaaS) Service Summary


Table 1 below provides a summary of this offering. Both a trial version (online self-serve is planned) and a production version are available today. The SAP ASE Persistence Instance is a Database as a Service (DBaaS) offering within the SAP HANA Cloud Platform PaaS.

 

 

Criteria              HANA Cloud Platform
--------------------  ------------------------------------------------------------------
Type                  SAP ASE Persistence Instance Service (DBaaS, Database as a Service)
License Model         Trial (available today; online self-serve planned); productive-use monthly subscription (available today)
Version               15.7 SP132
Access                Online
Operating System      SUSE Linux Enterprise Server 11 SP3
Cloud Service(s)      HCP Persistence Service (Database as a Service - DBaaS)

Table 1

 

Getting the Production Use SAP ASE Persistence Instance (DBaaS) Version Today


Your Account Executive (AE) is your interface for provisioning the SAP ASE DBaaS Persistence Service on SAP HANA Cloud Platform. Figure 4 below describes this process.

 

Figure 4 (image: getasedbaasprocess.JPG)

 

NOTE: The same process (Figure 4) applies to the trial version as well. Self-serve trial access is planned from the SAP ASE Persistence Instance website (URL given below).

 


Resources

 

SAP ASE Persistence Instance (DBaaS) on SAP HANA Cloud Platform:

http://hcp.sap.com/capabilities/data-storage/ase-dbaas.html

 

SAP HANA Cloud Platform Documentation:

https://help.hana.ondemand.com/help/frameset.htm?533384eda57e428f98a43815e6a11119.html

Getting started after installing SAP ASE 16.0 SP02 on Windows


You can download SAP Adaptive Server Enterprise from the SAP Software Download Center: https://support.sap.com/software.html?url_id=tile_download_software.

 

The steps below assume that you have chosen default settings during installation.

 

Start and Stop Servers

After you install ASE on Windows, you can start and stop your server from your local services (just search for "services" on your computer and click on "View local services").


Or you can start the server using RUN_<server_name> files.

Access SQL Console

There are two ways to access the SQL console.

You can log in to the isql console from the command prompt using "isql -U<username> -P<password> -S<your_server_name>". The default user is "sa".

You'll see "1>" on the next line if the login is successful.

 

Try making a simple query to the sample database pubs2:
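A minimal example, assuming the pubs2 sample database was installed with the server (the "1>" and "2>" prompts are printed by isql):

```sql
1> use pubs2
2> go
1> select title_id, price from titles where price > 20
2> go
```

Any table in pubs2 will do; the point is simply to confirm that the server answers queries.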

Use “exit” to exit the isql console.

 

For the Interactive SQL console, open the Interactive SQL app or run dbisql.exe from SAP\DBISQL-16_0\bin.

To test it, select a sample database in the top right corner and make a simple query. Click the play button to see query results.


 

Logging in to Cockpit

To access cockpit (a web-based management and monitoring tool), make sure that COCKPIT-4 is running in your local services (it should have been started automatically by the installer). If it's not running, start it from your local services or by running cockpit.bat from SAP\COCKPIT-4\bin.

Then, in your browser, type https://<hostname>:4283/cockpit. This assumes that during installation you chose the default port 4283.

You'll need to continue to the website even if your browser advises against it. The default user is "tech_user".

After you log in, you should see the "Monitor" screen:


 

Leave a comment if this post was helpful or to let me know how I can improve it.

Always On? Sybase? You must be kidding...


Well, [SAP] Sybase has always been well known (and coveted) for its replication technology.  Sybase Replication Server is a well-established, rock-solid product allowing homogeneous or heterogeneous replication solutions of different types and topologies across most platforms.  But as opposed to other DBMS vendors (say, Microsoft?), the solution was always kept separate from the ASE server.

 

It looks like SAP has decided to change this at last.  ASE 16 SP02 finally bundles Replication Server into the core installation package - that is, if you purchase the HADR ASE option...

 

What's in the package?  Let's have a closer look.  The latest ISUG issue has a brief description of the solution, but since I have not found an open reference to that document, and since the only other option is to read the official product documentation, I decided to summarize it here for those who do not have access to it.

 

ASE 16 SP02 came out with quite a few enhancements - Always On falls under the "Availability Enhancements" hood.  As the option name suggests, Always On offers both HA and DR capabilities to choose from.  It promises:

 

  • Zero data loss (HA)
  • Transparent Client fail-over
  • Planned/Unplanned fail-over support
  • Soft DB quiesce for planned fail-over without interrupting applications
  • Zero downtime upgrades for minor & major DB releases
  • Automated fault detection and handling
  • Ability to leverage replicate DB in enforced read-only mode with zero user administration

 

Replication Server has had most of these capabilities in the past, and indeed it is the infrastructure for the Always On option.  Yet having them all bundled together is a huge step forward.  I love in particular the transparent fail-over and automated fault detection - forced read-only sounds nice too.

 

As said - the option comes in two configurations:  HA & DR. 

 

The HA configuration encompasses two database servers tied together in synchronous data replication mode (apparently relying on the synchronous replication capabilities of RS 15.7 SP300) - a pair of ASEs with a pair of dedicated Replication Servers applying transactions directly to the companion side, with only one node active at a time:

 

[Diagram: HA configuration (HA.JPG)]

The DR configuration's replication mode is asynchronous (although ISUG specifies otherwise - a typo?..).  This is the familiar topology we have used with Sybase Replication Server for years - a pair of ASEs, each with a dedicated RS, connected through a direct route to speed up message delivery to the companion side:

 

[Diagram: DR configuration (DR.JPG)]

I am very curious how simple the management of the HADR option is and how scalable the topology behind it is.  I've been busy testing Replication Server for the last couple of months (replicating from an old ASE 15.x release to a new ASE 16 SP01 PL2, in either direction).  Although the setup part is pretty easy, one of the hurdles I have been stumbling on is the volume of data transferred between the various RS components - in particular, the pressure on the inbound queue.  In the HA configuration above, the RS responsible for applying transactions to the companion ASE seems to sit close to the companion ASE rather than the primary ASE.

 

As of today, when the replication agent forwards transactions from the primary log to the inbound queue, it translates each update/delete command into a command listing all of the primary table's columns.  For wide tables this impacts the volume of data moved across the network pretty badly (we've lodged a CR to deal with this - CR791846).  As minimal-column support kicks in only for the outbound queue, I am very curious to see what the out-of-the-box HA implementation is able to achieve (unless it is based on stream replication rather than transaction replication).  In the testing I have been performing, any latency introduced into the RA-to-inbound-queue chain threatened to blow out the primary ASE transaction log - the nightmare for any DBA messing with Replication Server.  And I have not even been testing synchronous replication!  I also wonder if the HADR option includes the various ASO enhancements introduced into RS over the past couple of years.
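For reference, minimal-column replication is requested per replication definition - a sketch with a made-up repdef name (and, as noted above, it only thins the data on the outbound side):

```sql
-- 'my_repdef' is an illustrative replication definition name;
-- 'replicate minimal columns' sends only the changed columns,
-- but this kicks in for the outbound queue, not the inbound one
alter replication definition my_repdef
    replicate minimal columns
go
```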

 

In short, HADR is out there - love it or leave it.

 

I hope we will go with the former but only time will tell...

 

Have fun,

 

ATM.

 

ps. I wonder what the pricing differences are between the HADR option and RS CORE (+ASO).

execute as owner restricted


Since ASE 15.7 it has been possible to create stored procedures with the option "execute as owner".

 

This is quite a nice feature - for example, to allow a user to unlock a login without having sso_role, you simply create a procedure like this:

 

create procedure sp__locklogin  @login  varchar(30) = null, @action varchar(10) = null

with execute as owner

as  exec sp_locklogin @login, @action

 

Normally you would use the sa login to create this procedure in sybsystemprocs and grant execute permission to a user-defined role. Grant that role to a login, and the login can lock or unlock any other login without having sso_role.
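A minimal sketch of that setup (the role and login names below are made up for illustration):

```sql
-- Create a user-defined role, let it run the wrapper procedure,
-- and grant the role to the operator's login (names illustrative)
create role lock_admin_role
go
grant execute on sp__locklogin to lock_admin_role
go
grant role lock_admin_role to helpdesk_login
go
```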

 

However, this feature presented a big security loophole...

Just having a dbo alias in the sybsystemprocs database allows you to create any procedure with execute as owner and execute it as the sa login, e.g.:

create procedure sp_myproc  @cmd varchar(500)

with execute as owner

as  exec (@cmd)

 

With the latest patch level SP136 for ASE 15.7 (and latest patch level for 16.0) this security hole has been partly fixed. (See KB 2202914)

When impersonating the database owner, via a dbo alias or via setuser, it is no longer possible to refer to objects outside the database when using the option "with execute as owner". Instead, you'll have to create the object as the database owner itself.

(Creating the proc sp_myproc, which executes any SQL statement as login sa, is still allowed.)

 

Example of what will fail: use a login having sa_role (not sa), which gets a dbo alias in sybsystemprocs.

 

create proc sp__listen

with execute as owner

as select * from master..syslisteners

go

Msg 16367, Level 16, State 1:

Server 'ASE157', Procedure 'sp__listen', Line 1:

EXECUTE AS OWNER procedures cannot be created by an alias or with the SETUSER command when the procedure has SQL statements that reference objects across databases. Create the procedure from a non-impersonated session.

 

The only way to get this procedure created is using the sa login itself.

(Of course the same result can be achieved by executing sp_listener from sp__listen, but that's not the point)

 

Error 16367 is also produced at other times, e.g. when running installmaster.

This must be executed using the sa login, because several procedures for sp_sysmon rely on this feature (KB 2183652) and can now only be installed by sa

 

In many environments the sa login is locked out for security reasons. Do you want to unlock the sa login just to deploy a stored procedure in sybsystemprocs?


Ordering by column position generates error if called from within SP - ASE 16.0.1.3


The following code:


create table test1 (a int, b int, c int)

create table test2 (a int, b int, c int)


create proc sp_test

as

begin

select a.a, a.b from test1 a where a.a > 1 and a.a is not null

and exists (select 1 from test2 b where b.a = a.a)

order by 1, 2

return 0

end

 

will work on ASE versions 15, 16.0.1.2, and 16.0.2.2, but not on 16.0.1.3:

 

Msg 207, Level 16, State 4:

Server 'XXX', Procedure 'sp_test', Line 4:

Invalid column name 'a'.

Msg 207, Level 16, State 4:

Server 'XXX', Procedure 'sp_test', Line 4:

Invalid column name 'a'.

1>

  1. Depending on how many conditions are in the where clause
  2. Depending on whether an exists subquery is present
  3. Depending on whether sorting is done by column position

 

Did anyone come across this? 


The same statement run outside the SP works correctly:


select a.a, a.b from test1 a where a.a > 1 and a.a is not null

and exists (select 1 from test2 b where b.a = a.a)

order by 1, 2


a          b

----------- -----------


(0 rows affected)


Took me a while to understand what the problem is....
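For what it's worth, a plausible workaround (I have not verified it on 16.0.1.3) is to order by the column names rather than by select-list position:

```sql
-- Same procedure body as above, but ordering by explicit column
-- names instead of the 'order by 1, 2' positional form
create proc sp_test
as
begin
    select a.a, a.b from test1 a where a.a > 1 and a.a is not null
    and exists (select 1 from test2 b where b.a = a.a)
    order by a.a, a.b
    return 0
end
```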

 

HTH,

 

Andrew

Fast First Row optimization goal result in table scanning


Has anyone experienced this issue before?  Setting the optimization goal to fast first row resulted not in getting the first rows faster, but in the optimizer choosing to table-scan some of the tables and then build dynamic indexes...  Statistics are accurate and in place.  The tables are indexed perfectly.  Everything works pretty well without this goal - I wanted to use it to work on larger result sets in the application without getting an application timeout message.  I got the opposite effect instead...
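For reference, the goal in question can be switched at session level - a sketch per the standard ASE syntax:

```sql
-- Bias the optimizer toward plans that return initial rows quickly
set plan optgoal fastfirstrow
go
-- Switch back to the usual default, which optimizes full result sets
set plan optgoal allrows_mix
go
```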

 

Any ideas?

 

Cheers.

 

AM

The provider 'sybcsi_profiler' was specified in the active configuration but it is not available for loading


Has anyone come across this issue before?

 

I've downgraded ASE 16.0.1.3 to ASE 16.0.1.2.  All is fine except for the XP Server, which can no longer function:

 

03/08/16 09:02:17 AM  XP Server is now running.

00:0006:00000:00696:2016/03/08 09:02:29.51 kernel  XP Server started successfully.

Sybase CSI Error Message: The provider 'sybcsi_profiler' was specified in the active configuration but it is not available for loading.

Failed to perform hash digest on seed.

00:0006:00000:00696:2016/03/08 09:02:29.52 server  Error: 7221, Severity: 14, State: 1

00:0006:00000:00696:2016/03/08 09:02:29.52 server  Login to site 'ASE_XP' failed.

Failed to perform hash digest on seed.

 

Could not find any reference to this anywhere.  A case has been opened - but if anyone can speed up the resolution, it will be greatly appreciated.  Reinstalling XP Server did not help.  Probably a wrong configuration value sits somewhere in ASE?

 

Cheers,

 

AM

Webcast: Spotlight on Financial Services with Calypso


Join us for a special webcast.


Enable consolidation, simplification, and growth with SAP Adaptive Server Enterprise and Calypso


Date: Tuesday, March 15, 2016

Time: 9:00 a.m. PST

Event Registration


Calypso, a leading provider of front-to-back technology solutions for financial markets, is used by more than 200 financial firms and 34,000 capital markets professionals. When running with SAP Adaptive Server Enterprise, Calypso can help your organization reduce the total number of systems in use, simplify business architecture, streamline processes, and improve efficiency - all while reducing total cost of ownership.

Join this Webcast on March 15 to find out how Calypso and SAP Adaptive Server Enterprise will enable your business to:

  • Run on a single integrated platform
  • Meet your business and technical SLAs
  • Expand usage into new in-memory models

Register now to attend this session, and discover how SAP Adaptive Server Enterprise and Calypso can support innovation today and in the future.

Webcast: Calypso Solutions & SAP Adaptive Server Enterprise


Enabling consolidation, simplification, and growth with SAP Adaptive Server Enterprise and Calypso


Date: Tuesday, March 15, 2016


Time: 08:00 AM PDT


https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&referrer=&eventid=1148996&sessionid=1&key=857802A169ADE8214D218DDDE2C153F4&regTag=&sourcepage=register

Calypso, a leading provider of front-to-back technology solutions for financial markets, is used by more than 200 financial firms and 34,000 capital markets professionals. And when running with SAP Adaptive Server Enterprise, Calypso can help your organization reduce the total number of systems in use, simplify business architecture, streamline processes, and improve efficiency – all while reducing total cost of ownership.

We will discuss how Calypso and SAP Adaptive Server Enterprise will enable your business to:

  • Run on a single integrated platform
  • Meet your business and technical SLAs
  • Expand usage into new in-memory models

 

Please join us for a webcast with Talat Sadiq, Mayank Shah and Vikram Pradhan, from Calypso, and Ashok Swaminathan, Senior Director, Product Management at SAP, to learn how this leading Capital Markets STP provider uses SAP ASE to power their mission critical trading and risk platforms.


Register now: https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&referrer=&eventid=1148996&sessionid=1&key=857802A169ADE8214D218DDDE2C153F4&regTag=&sourcepage=register

Why You Should Strive to Get Hold of Platform Edition


As many of you probably know, SAP has changed quite a bit the way the ASE product is licensed.  Those who do probably remember this image, released about a year ago by SAP.

 

ASE_EDITIONS.jpg

 

Platform & Edge editions are quite a nice addition to ASE license basket.

 

However, a thing not so straightforward - although no less attractive - about the Platform edition is that it comes with IQ & RS cores interchangeable with ASE cores.

 

A thing even less straightforward is that... RS comes with ASO and HVAR license included - and IQ comes with VLDB and ASO packed in!

 

Now, I do not know what you think, but I think this is awesome.  IMO this pretty much obsoletes requesting separate ASE/RS/IQ licenses (unless there is a specific feature not covered by PE - e.g. IMDB/ECDA etc.).

 

I'm loving it.  Hope you too...

 

Cheers,

 

AM.

SAP and ISUG-TECH host SAP ASE Meet-ups in a city near you



Register now


SAP and ISUG-TECH have teamed up, and we're hitting the road this spring to meet you in person and share the latest innovations and insights on SAP ASE. Join us for the unique opportunity to:


  • Learn how SAP ASE is helping businesses improve performance, reliability, and efficiency
  • See what's new and improved about SAP ASE
  • Discuss solution best practices
  • Connect, socialize, and share ideas with your peers


You don't want to miss this insightful event. Register now to join us.

 



Date-time Adjustment for Workload Replay in SAP ASE 16.0 SP02 PL03


SAP ASE 16.0 SP02 PL03 introduces date and time adjustment for workload replay (workload analyzer option). You can now configure the SAP ASE cockpit so that when the replay begins, the date and time of the replay SAP ASE server are set to the time at which the capture originally started.

 

By default, the date and time on the replay SAP ASE server is the actual date and time at which the replay occurs. To reset the server time, in the Replay Wizard, check the following box on the "Options" page:

[Screenshot: Replay Wizard "Options" page (image1.png)]

The choice will be reflected on the "Summary" page and in the "Replay Settings":

 

[Screenshots: "Summary" page and "Replay Settings" (image2.png, image3.png)]

See user documentation at  http://help.sap.com/saphelp_ase1602/helpdata/en/88/6af0ec2f7344499a0946cc022c9969/content.htm?frameset=/en/30/c1b3ab4dff4b5a985cc5262e8d804f/frameset.htm&current_toc=/en/1a/f592d6ac631014a7878b4f7b67c2cc/plain.htm&node_id=525.

Where are my transactions? A puzzle for the curious...


I am in the midst of preparations for a major ASE upgrade.  To control downtime better in a 24/7 environment, this one will rely on Replication Server.  As part of testing we have a batch process running a heavy load on the DB - the same one that will run in the real environment.  It is a multi-process application submitting various concurrent calls - DML & stored procedures - to a DB.  The batching is performed on a database connected to the replication server (MSA), against either the upgraded (ASE 16) or the current (ASE 15) version of the same server, to test both ASE and RS performance in each case.

 

So the settings are: one batch process, a pair of ASEs (16 & 15), one replication server (latest release), MSA, transactions submitted only at the primary ASE side, and the application switched from one ASE to the other to test and tune the topology with both versions of ASE serving as primary.

 

Thus far all is good (with lots of rock climbing done so far).

 

Having distilled the tests and the topology to the final stage, I started to get row count mismatch errors between the source and the target for one of my DSIs.  Apparently these were due to particular application design / missing unique indexes on the primary (set rowcount N + DML on the primary may affect a different result set on the replicate - hence the need for unique indexes for RS).  To test this, I reconfigured the RS to ignore row count differences and set up a procedure to verify that all the source/target tables have at least the same row count.
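To illustrate the point (the table name, column, and value below are made up): without a unique index, a rowcount-limited DML statement is free to pick a different qualifying row on each side:

```sql
-- With several rows matching the predicate and no unique index,
-- the primary and the replicate may each delete a *different*
-- physical row, silently diverging
set rowcount 1
delete from orders where status = 'stale'
set rowcount 0   -- reset the limit
```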

 

Here is where the fun began.

 

Unless one installs Data Assurance and configures it to verify that primary and replicate are in sync all the time, few will test for mismatches between the primary and replicate ASEs.  Usually one relies on the replication server for this - if the rep server log is clear and the DSIs work smoothly, most probably all is fine.  Although comparing row counts primary-to-replicate misses the case of updates performed on primary tables without unique indexes (with row count validation turned off and no autocorrection turned on), it does give an approximate state of synchronization of the ASEs linked to a rep server.  If all the rows on the primary are there on the replicate for all tables, most probably all is fine again.

 

Testing the replication from ASE 15 to ASE 16 with my batching application and no row count validation proved to be effective - no row count mismatches bringing the DSI down, and no row count mismatches post-processing.  Sort of cool.  Testing the replication from ASE 16 to ASE 15 with the same settings and under the same batch processing all of a sudden revealed a table with a consistent mismatch in rows between the primary and the replicate.  What's more, the mismatched table was one that had a unique index AND, post-processing, the replicate side had MORE rows than the primary.

 

Hm.  I definitely did not expect that to happen.  Sounds like I'm losing some of my transactions somewhere?  Is there a hole in the replication server???  Reluctantly I turned the row count validation back on - knowing that for this table it would have no effect (set rowcount N on the primary can cause fewer rows on the replicate, but definitely not more).  Reran the test.  Nope.  No change - the same difference.  Started to scratch my forehead harder...  Concurrency?  Consistency?  Parallelism?  Turned to TS for additional clues...

 

My RS is configured to run in the old-fashioned parallel_dsi mode with the serialization method set to no_wait and the partitioning rule set to origin_sessionid (to compensate for the N-to-1 throughput killer of a single-threaded DSI and bulk transactions split/rerun with no partitioning).  In theory (and based on documentation) this should not cause any consistency issues between the primary and the replicate.  This is now my second major suspect...
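For context, these DSI settings are applied per connection from within the Replication Server - a sketch with an illustrative connection name (the connection normally has to be suspended and resumed around such changes):

```sql
-- ASE16.mydb is an illustrative dataserver.database pair
suspend connection to ASE16.mydb
go
alter connection to ASE16.mydb set parallel_dsi to 'on'
go
alter connection to ASE16.mydb set dsi_serialization_method to 'no_wait'
go
alter connection to ASE16.mydb set dsi_partitioning_rule to 'origin_sessionid'
go
resume connection to ASE16.mydb
go
```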

 

While I'm waiting for the test to kick off (it takes time to reset and rerun), I was wondering - have any of you messed with these things?  Any additional ideas on how to identify which hole in the rep server my transactions are leaking out of?  It would be far from a lie to say that I feel a bit dizzy about this situation.  I love my transactions.  I definitely want to see all of them on the other side of the bridge...

 

All I've done for now is revert to the default serialization method with no partitioning, and add DML triggers on the primary table to see the DML order for transactions submitted on the primary.

 

Cheers,

 

 

Andrew

SAPPHIRE NOW - Orlando, Florida, 2016...


To be honest, this is the first time I've been frustrated much, much more than excited while looking through the session catalogue of the forthcoming great SAP technology event (you may see it here if you want to be frustrated too: https://sessioncatalog.sapevents.com/go/agendabuilder.sessions/?l=130&locale=en_US).

 

Please tell me that I'm wrong - I tried to find sessions on ASE and found none (a search engine error, I hope).  This made me upset - but not really surprised, as I've got used to this rather methodical neglect.

 

What made me really upset is when I came across this session:

 

ES34650 - In-Memory Database Proof Points for Customers with Oracle 12c (Theater Presentation)
In-memory database processing allows customers to achieve high performance with analytical types of queries. The Oracle implementation of an in-memory database provides customers with the performance expected in a way that is easy to implement, transparent to the user, and fully integrated with all other databases features.

 

 

Are you serious?  Oracle has at last released a flavour of in-memory database that approaches ASE IMDB (with a delay of ... a decade?).

 

SAP gives Oracle the platform to talk about this ground-breaking step forward and keeps philosophically quiet about "things important in life".  What about ASE IMDB "proof points"?  Are there any?  How many SAP users know that the platform existed in ASE long before it was introduced into Oracle?  Will it continue to be buried behind the gloss of the real SAP favourite - HANA - to the extent that an ASUG technology summit will advertise Oracle products and meticulously avoid talking about similar (better? more mature?) products from its ex-Sybase factory - because they threaten to obscure the gloss of its favourite bright kid (737 sessions out of 1560 somehow related to HANA and 0 to ASE)?

 

I came across this rather distressing negligence about a year ago when attending the Melbourne SAUG conference.  Oracle was there - talking gleefully about how Oracle has a much better chance than any of its competitors (SAP inclusive) of offering a real in-memory DBMS platform.  To me this looks like a cancerous development in the soft tissue of SAP's marketing brain.  The problem is - no one seems to care.

 

Not sure what to say.

 

I feel sad.

 

Andrew

Get Rid of the Crutches – Right Size Proc Cache


Don’t get me wrong – crutches are useful things.  But some places are still hobbling around on crutches long after they should not be.  What crutches am I referring to??

 

  • Dbcc proc_cache(free_unused)
  • Trace flag 753 (and 757)

 

Now, before some well-meaning but misguided person assaults me for being anti-crutches – I, of course, recognize the need for them…at a point in time.  But when you break a leg, the point is you go to the doctor, get it fixed, and then only use the crutches until it heals.  What you don’t do after breaking your leg is simply dig out a pair of crutches and use them for the rest of your life without ever getting your leg fixed.  That just isn’t natural.

 

And yet there are places that run with the above in place….for years.  Years of unnecessary pain, as both of the above point to proc cache being undersized.

 

So how do we correctly size it??

 

First, you have to understand the biggest consumers.  Don’t jump in with “stored procedures (and triggers)” just yet….for example, most SAP Business Suite systems run with multiple gigabytes of proc cache and there isn’t a stored procedure to be found.  If you look in monProcedureCacheModuleUsage, you will see there are nearly 30 different allocators of procedure cache, including:

  • Parser
  • Utilities
  • Diagnostics
  • Optimizer
  • Execution
  • Access
  • Backup
  • Recovery
  • Replication
  • Procedural Objects
  • Sort
  • HK GC
  • HK Chores
  • BLOB Management
  • Partition Conditions
  • Pdes Local HashTab
  • Statement Cache
  • CIS
  • Frame Management
  • AuxBuf Management
  • Network
  • Procmem Control
  • Data change
  • Dynamic SQL
  • Cluster Threshold Manager

 

Of these, the top most common consumers typically are:

  • Optimizer
  • Execution
  • Procedural Objects
  • Statement Cache
  • Dynamic SQL

 

With a few others sometimes contributing, such as

  • Parser
  • Utilities
  • Access
  • Sort
  • Procmem Control

 

The two culprits that I think contribute the most to customers relying on the above crutches are grossly underestimating the proc cache requirements of the Optimizer+Execution and of the Statement Cache.  The latter is often directly due, I think, to the former – so let’s spend some time there first.

 

Every query – whether a simple “select * from table” or a complex proc – uses procedure cache.  Every query.  Get used to it.  It starts when the query is first received and the SQL is parsed.  Thankfully, this tends to be not that much – just a few pages of proc cache.  Then we get to optimization….and OUCH!!  This is a heavy consumer of procedure cache.  The typical query optimization plan can run from the low 10’s of KB for a simplistic “select * from table” to multiple 10’s of MB.  This would also be true for simple inserts, updates or deletes using literal values.  However, once joins start, this quickly jumps to ~200KB.  More complicated queries such as large joins/nested subqueries can easily consume a few MB of proc cache.

 

But wait.  Remember how we got that plan.  First the optimizer had to develop a number of “work plans” as possible alternatives.  Each of these, as developed, is kept in memory until the final costing.  The number of work plans depends on the number of tables involved as well as the number of indexes and predicates in the where clause – but it can easily be 10-20.  Consequently, a single query optimization can use 2-4MB of procedure cache during optimization – and then drop down to 200KB once the final plan is chosen.

 

But we are not done yet.  Those familiar with 16sp02’s compiled query feature understand that once a query optimization plan is chosen, that is not the end.  Nope.  The query execution engine needs to develop an execution plan for how it is going to execute the physical operators in the query plan.  For example, if a hash-based grouping is chosen, it needs to create the hash buckets (the number of which depends on the estimation), etc.  The net result is that the 200KB query plan likely needs another 200KB of proc cache for the execution plan.  This is why, if you pay much attention to monProcedureCacheModuleUsage, Optimizer and Execution are frequently consuming proc cache.

 

That is just for a basic query with a 2-3 way join and a few predicates.

 

Now, take that 6000-line stored procedure, which likely contains a few hundred of these.  Yep.  Now we are talking 10’s of MB if not 100MB of proc cache.  But usually, most well-developed procs need about 2MB.  And remember, that is just for one copy.  With each concurrent execution, yet another copy of the plan is loaded.  Of course, remember – what we have loaded is just the procedure optimization plan – we still need an execution plan for each statement.

 

How can we use this for sizing?  The answer is simple – how many concurrent queries is your system experiencing on a regular basis?  A good estimate if you are not sure is to simply figure 10x the number of engines you are running.  So, on a 30-engine system, it is likely there are 300 queries active at any point in time.  If we figure that each concurrent query needs about 2MB of proc cache as a very loose estimate – then we are saying we need about 600MB of proc cache just for query optimization and execution – which is not unlikely.  We also know that it will take probably 500MB of proc cache to hold 250 normal stored procs.  Each statement in statement cache will need another ~100KB for just the statement – and each plan for that statement will need memory as well…..  Ohhhh…..the math…it just hurts the head.
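
That quick concurrency-based estimate can be sketched as a tiny calculator (a hypothetical helper – the 10-queries-per-engine and 2MB-per-query figures are the rough rules of thumb above, not measured values):

```python
# Quick concurrency-based proc cache estimate.
# Assumptions (rules of thumb from the text): ~10 active queries per engine,
# ~2MB of proc cache per concurrent query (optimization + execution plans).
def query_proc_cache_mb(engines, queries_per_engine=10, mb_per_query=2):
    concurrent_queries = engines * queries_per_engine
    return concurrent_queries * mb_per_query

# A 30-engine system -> ~300 concurrent queries -> ~600MB of proc cache
# just for query optimization and execution.
print(query_proc_cache_mb(30))  # 600
```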

 

The better method is to simply think about the number of concurrently active sessions.  Now, if you are the type of shop that runs with 10000 connections with most of them idle at any point in time, you might want to start monitoring monProcess periodically and then run a query against the collected data similar to:

 

select SampleTime, count(*)
  from monProcess_hist
 where WaitEventID != 250
 group by SampleTime
 order by 2 desc

 

For grins, let’s say the answer is 1000.  Now we need to know how many are running stored procs on average vs. executing queries.  All we need to do is add a where clause of LineNumber > 1 to the above query (yes – a batch can have more than one line….but then batch/stored proc – nearly the same thing when it comes to proc cache for optimization).  Again, let’s say the answer is 100 (a nice 10%).

 

Now that we know our concurrency, we have to figure out how much proc cache per user.  Remember, in addition to currently executing queries, we need proc cache for statements in statement cache (in addition to the statement cache itself) and we need to keep some of our most frequently executed procs in cache.  The latter – if we assume our 10% is hitting them – may already be counted, but let’s use the following estimates:

 

  • Each query needs 200KB of proc cache for optimization – plus the same for execution
  • Each user will cache ~15 statements (between statement cache or Dynamic SQL – take your pick).  Each statement needs 200KB for the plan.
  • Each proc will need 2MB of proc cache – and we want to keep ~250 or so in cache (including all the concurrent plans)

 

We then can compute:

  • 900 queries * 200KB per query * 2 (opt + exec) = 360MB for queries
  • 100 procs * 2MB per proc * 2 (opt + exec) = 400MB for proc based on concurrency
  • 250 procs in cache * 2MB per proc = 500 MB for proc cache (bigger – we will use this number)
  • 1000 users * 15 statements/user * 200KB/statement=3GB

 

…..so, with just some back of the envelope planning, we can see that we would need >4GB of total proc cache.  Let’s separate out statement cache – each 1GB of statement cache can hold 15000 statements (without the plans), so really we are talking 1GB of statement cache and 3.5GB of proc cache (for the procs, the queries and the statement cache plans).
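
The bullet math above can be reproduced with a small sizing sketch (a planning aid, not a formula – every figure is the rough estimate from the text):

```python
# Back-of-the-envelope proc cache sizing, reproducing the bullet math above.
# All defaults are the rough estimates from the text, not measured values.
def size_proc_cache_kb(active_sessions, proc_sessions,
                       kb_per_plan=200,     # ~200KB per query/statement plan
                       kb_per_proc=2000,    # ~2MB per stored proc plan
                       cached_procs=250,    # procs we want resident in cache
                       stmts_per_user=15):  # cached statements per user
    query_sessions = active_sessions - proc_sessions
    queries_kb = query_sessions * kb_per_plan * 2           # opt + exec plans
    procs_concurrent_kb = proc_sessions * kb_per_proc * 2   # opt + exec plans
    procs_cached_kb = cached_procs * kb_per_proc
    procs_kb = max(procs_concurrent_kb, procs_cached_kb)    # use the bigger figure
    stmt_plans_kb = active_sessions * stmts_per_user * kb_per_plan
    return queries_kb + procs_kb + stmt_plans_kb

# 1000 active sessions, 100 of them in procs:
# 360MB (queries) + 500MB (procs) + 3GB (statement plans) -> ~3.9GB minimum,
# before separating out the statement cache itself.
print(size_proc_cache_kb(1000, 100) // 1000, "MB")  # 3860 MB
```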

 

At this point, mistakes are made.  The typical DBA might set proc cache at 4GB and statement cache at 1GB and think the job is done.  And then complains to support about proc cache spinlocks…..and support gives them the crutch of dbcc proc_cache(free_unused) and the problem goes away…not really.  They have to run it every 30 minutes or whatever….so they are treating the symptom but not the problem.  The real issue is that our MINIMUM requirement for procedure cache is 4.5GB.

In reality, when ASE boots, it allocates procedure cache and then takes 50% of the FREE pages and distributes them across the engines to the Engine Local Cache (ELC).  This ELC is where we grab proc cache buffers from when doing optimization or execution, without having to grab from the common pool – thus avoiding the spinlock.  However, if all the available proc cache is used – or nearly so – then there is nothing left for the ELC.  As a result, when an engine needs more proc cache, it has to go to the global pool.  Now, if memory is tight, it is likely more fragmented, as we have likely been doing a lot of proc cache turnover – attempts to do large (16KB) allocations fail and so we retry with 2KB allocations….and since this works, the next thing we know we are happily running with TF753 (which disables large allocations) and periodic dbcc proc_cache(free_unused), and heaven help the poor fool that suggests we simply haven’t sized things correctly.

 

But you haven’t.  The least little bump….like a more complicated query such as those EOM reports….and proc cache is exhausted.  What happens then is an engine sees it has nothing in ELC, requests memory from the global pool – finds there is none – and then ASE simply tells every engine to stop what it is doing and flush any free ELC proc buffers back to the global pool so that engine 1 can do its query.  That hurts.  Mostly because as soon as engine 1 is done, engine 2 is screaming for proc cache and we get to repeat the debacle.  And the proc cache spinlock heads into warp speed.

 

The answer – you guessed it – take your computed proc cache and double it.  So, if you think you need 4GB of proc cache – configure for 8GB.  Then, if your proc cache spinlock is still high, it is likely due to another reason like frequent turnover (e.g. exec with recompile) or some other cause.  But at least you can get off the crutch of dbcc proc_cache(free_unused) and TF753.
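
As a one-liner, the rule of thumb above (a sketch – the 2x factor is the headroom recommendation from the text, which also leaves room for the 50% ELC split at boot):

```python
# Rule of thumb from the text: configure double the computed minimum so that
# ~50% of free proc cache pages can go to the Engine Local Caches at boot
# and there is headroom for spikes (EOM reports, etc.).
def configured_proc_cache_mb(computed_minimum_mb):
    return computed_minimum_mb * 2

print(configured_proc_cache_mb(4096))  # a 4GB minimum -> configure ~8GB
```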

 

It makes me shudder every time I hear some supposed internet guru telling a customer to use them….without addressing the real problem.  By all means - use the crutch - but get the leg fixed and then ditch them.  Right size your proc cache.

OMG! I run into log suspend again! What do I do now???


How often are we faced with this situation?  We are running various types of stress tests on a large environment.  All seems good.  All the tests are working as planned.  All of a sudden the DB freezes.  Nothing happens.  Bang!  For some mysterious reason our large DB has run out of log space and everything is stuck in log-suspend state.  Not worth killing spids or dumping tran manually.  That will freeze too.

 

Although some may argue that it never happens to good DBAs (are you nuts?  didn't you hear about the trunc log on chkpt & abort tran on log full options! - some decent monitoring during the tests?) - sometimes these things happen and unpleasantly surprise us when we expect it the least (the last two times I faced this nasty situation were when I discovered how bad the impact of triggers may be on xOLTP-type DMLs even though they do nothing - or how lazy the checkpoint process may be while bcp-ing data in 1k batches into a trunc on chkpt DB with a 12 GB log).  Whatever the reason - it is always the nasty feeling that now I will have to rebuild all this s...t again and waste hours on recovery.

 

How can this situation be treated, though, in the most efficient way?

 

The best & fastest option would be - to increase the log device and let ASE take care of itself.

The worst and slowest option would be - to crash the data server and curse ASE unable to start up.

 

Unfortunately the best option is not always possible (for various reasons) and the worst option... Hey!  Is it really the worst?

 

Often, to recover from a situation like this, one goes through the lengthy process of rebuilding the whole DB from scripts (listonly = 'create_sql' may sometimes help) and reloading it from dumps to preserve the DB layout.  So a DBA kills the server...  Starts it in no-recovery mode (TFxxx)...  Drops the database with dbcc dbrepair(xxx)...  Rebuilds it...  Loads it back...  A day's work goes down the sink.

 

There is a much quicker way, however, which is often overlooked.  To use the method above we sometimes mark the DB manually in sysdatabases as suspect (value 256).  But there is another, more benign, status which may help to speed up recovery in this situation.  If we know for sure that we will have to reload the DB anyway - we may use the value 32 instead.  The status stands for:  Database created with for load option, or crashed while loading database; instructs recovery not to proceed.

 

That's precisely what one needs:

 

dbcc dbreboot(reboot_norecovery, "LOGSUSPENDB")

 

If this one works - you're lucky, and may proceed to reload the DB.  If it does not - you'll have to crash ASE:

 

sp_configure "allow updates", 1
update sysdatabases set status = 32 where name = "LOGSUSPENDB"
shutdown with nowait
startserver -f MY_NAUGHTY_ASE    --(LOGSUSPENDB will not be recovered on startup)
load database LOGSUSPENDB from ...
sp_configure "allow updates", 0
sp_dboption ...    --reset the DB options

 

Unless someone can think of a better option to recover from this nasty situation, I'd recommend the one above.

 

Love it or leave it - I leave it to you.

 

Have fun.

 

ATM.

 

ps.  Of course, good DBAs never get into this situation in the first place, but what is a good DBA anyway?
