Channel: SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications

ASE 16 & Managing VLDB’s with VLH (Very Large Hardware)


VLDBs….it seems when discussing these, everyone focuses on the data volume aspect and the operational concerns.  These, of course, are very valid points.  However, when working with VLDBs, we are also often working with “Very Large Hardware” - machines with very high core counts, or at least very large amounts of memory - and this can lead to some not-so-entertaining pain points if the DBAs are not aware of what it means.  It can mean that the VLDB implementation suffers even more than it does from the challenges with respect to backup and recovery, creating indexes online, etc.

 

Last week I was at a customer site (I don't get to do this much any more - so it was a very enjoyable experience) and the lead DBA manager made a comment to me after looking at the results of some performance tests we were running.  To paraphrase, what he said was:

 

“In the old days, we focused on tuning physical IO’s - it looks like now we need to focus on spinlocks.  Back then, with low [available] memory, we did a lot of physical reads.  Today, on the larger memory footprints, spinlocks and memory contention is a bigger problem”

 

He made that observation after we had changed several different spinlock configuration values as well as changed some cache configurations and bindings to also help eliminate spinlock contention.  The system we were testing was 40 cores/80 threads and ~450+GB of memory, in which all the current day’s data (which was being analyzed) easily fit into memory with no physical reads (the remaining days’ data, of course, on disk).  This is typical - how many of our systems do we proudly state have 95-98% cache hit ratios with only 1/10th of the memory of the full database size???  Probably most.

 

At this customer - like many others - a massively parallel operation ran at least periodically during the day (multiple times per hour I suspect).  The reason for massive parallelism was simple - the volume of data to be processed simply precluded a single threaded process.  I have seen this all too often - with healthcare claims and billing processes as well as FSI close market processes, etc. all running high 100’s or 1000's of concurrent sessions, all doing the same logical unit of work but dividing the workload among the concurrent sessions.  Simply put, single threaded solutions don’t work in VLDB - and the application developers quickly resort to massive parallelism to try to get all the work done within the desired timeframe.  The quicker the better.

 

As was this customer’s case.  His problem was that as he ramped ASE 15.7 up beyond 24 engines, severe spinlock contention (>50%) resulted in exponentially increasing response times.  Fortunately, we were running ASE 15.7 sp110 which had monSpinlockActivity - which makes it much, much easier to diagnose spinlock issues.  If you have never used this table…..you are really missing something - it has become a key table for me - much like monOpenObjectActivity.
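For reference, this is roughly the kind of query we kept rerunning against monSpinlockActivity - a minimal sketch, assuming the MDA tables are enabled and using the column names as I recall them from 15.7 sp110:

-- top spinlocks by contention since the counters were last cleared
select top 10 SpinlockName, Grabs, Spins, Waits, Contention
from master..monSpinlockActivity
order by Contention desc
go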

 

Net result was that a process that should have run in 10-20 seconds still hadn’t finished after well over 5 minutes when running 64 engines and scaling up the number of concurrent sessions....in fact it was taking so long they were simply aborting the test.

 

Ouch.

 

Unfortunately, we were not running ASE 16.

 

Why do I say that??  Because at least two of the spinlocks causing problems were the DES and IDES spinlocks.  However, the first spinlock that was causing contention reminded me why VLDBs should be using DOL locking - datarows preferably.  That spinlock was the Address Lock spinlock.  Yes, this is configurable via “lock address spinlock ratio”….which mystifyingly is set to 100 by default.  Now, that may be fine for pubs2, but with any decent-sized database with a large number of tables, things get ugly quickly.  Specifically, this spinlock is used for indexes on APL tables.  Now, I hear quite often about the supposed penalties of DOL locking - with memcopy, etc. - but there is one trade-off that you need to consider when it comes to concurrency and contention:

 

  1. is it better to make it tunable to reduce the contention, or…
  2. is it better to avoid the contention to begin with by using local copies

 

AHA!  Never thought of that did you?  Yes, APL tables (in my mind) have a much bigger scalability issue in that you can’t avoid address locks.  Hmmmm….yeah, we can set “lock address spinlock ratio” to 5…but what exactly does that do??  Well a quick doc search reveals:

 

For Adaptive Servers running with multiple engines, the address lock spinlock ratio sets the number of rows in the internal address locks hash table that are protected by one spinlock.

Adaptive Server manages the acquiring and releasing of address locks using an internal hash table with 1031 rows (known as hash buckets). This table can use one or more spinlocks to serialize access between processes running on different engines.

Adaptive Server’s default value for address lock spinlock ratio is 100, which defines 11 spinlocks for the address locks hash table. The first 10 spinlocks protect 100 rows each, and the eleventh spinlock protects the remaining 31 rows. If you specify a value of 1031 or greater for address lock spinlock ratio, Adaptive Server uses only 1 spinlock for the entire table.

 

Don't ask me why the docs cite the last example when in reality we are always trying to LOWER this value vs. raise it.  Soooo…when we set it to 5, we get 1031/5=206.2 or 207 spinlocks (rounding up) - 206 guarding 5 rows each and 1 spinlock on the last hashtable row.  It helped….although occasionally, we still saw periods of 2% or 5% spinlock contention.  Could we set it to 1?  Theoretically.  Might use quite a few resources on all those spinlocks though.  However, remember, the spinlocks are on the hash buckets...the address locks are in the hash chain, so even at 1 you could still have contention if the address locks you are after are in the same hash bucket.  BTW, DOL locked tables don’t use address locks on the indexes - so the whole problem is avoided - reason - address locks are part of the lock manager and the implementation for DOL avoids that part….I wonder whether that is due to the memcopy aspect or the index latching…hmmmm….  Getting back to the supposed penalty for DOL locking…when is a penalty not a penalty???
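For completeness, the actual change is a one-liner - a hedged sketch, since the exact value (and whether a restart is needed) should be checked against your own version’s documentation:

-- lower the ratio so more spinlocks guard the 1031 address lock hash buckets
-- 1031/5 rounds up to 207 spinlocks: 206 guarding 5 buckets each, 1 guarding the last bucket
sp_configure "lock address spinlock ratio", 5
go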

 

When the alternative has an even bigger side effect.

 

For other considerations on DOL locking, see my friend and colleague Cory Sane’s blog on DOL locking at http://scn.sap.com/docs/DOC-52752.

 

How much of a difference did this make???  Well, we dropped from a never-finishing run at 64 engines to something less than 5 minutes.

 

Now, back to the DES and IDES.  Similar to address locks, these also can be tuned via “open object spinlock ratio”, “open index spinlock ratio” and “open index hash spinlock ratio”.  Of course, if using partitions, one should also consider “partition spinlock ratio”.  In true P&T fashion, as soon as we eliminated the address lock spinlock problem, these guys who had been hiding suddenly popped up at 30-40% spinlock contention.  Remember - 10% spinlock contention usually triggers a search and destroy mission ….sooo…even though 30-40% was less than the 50%, we were still suffering mightily.  Once again, we simply set the ratios to 5 …or was it 10….sigh…getting older…memory fail…
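The changes themselves look like this - a sketch using the value 5 we settled on (or was it 10...), not a universal recommendation:

-- metadata descriptor spinlock ratios - same idea as the address lock ratio
sp_configure "open object spinlock ratio", 5
go
sp_configure "open index spinlock ratio", 5
go
sp_configure "open index hash spinlock ratio", 5
go
-- and if the schema is heavily partitioned
sp_configure "partition spinlock ratio", 5
go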

 

Anyhow, you can find a full description of what these things do in the white paper “Managing Workloads with ASE: Techniques for OLTP Scaling…” …which admittedly is a bit dated as it was based on 15.5, but it has a very good description of ASE metadata and some of the components in a descriptor.  Of course, it was plagiarized heavily from an even earlier work by David Wein, who was very instrumental in the ASE kernel....but the advantage of being co-workers is that plagiarism is allowed as long as we give credit where credit is due.....

 

This is where ASE 16 shines.  The descriptors in memory are not just used for query compilation or optimization.  More critically, the descriptors contain address locations for where indexes are in memory, etc., along with OAM and GAM pages - which makes finding them faster.  In addition, they are also used to track concurrency (such as keep counts) and performance metrics - an example of the latter is that the IDES is where the metrics that monOpenObjectActivity collects are kept.  Now, in pre-ASE 16, any time any of these counters was modified, ASE grabbed the spinlock.  Which spinlock???  Oh, yeah - the one that not only protects the current object descriptor, but, because of the open object spinlock ratio, also protects 99 more.

 

Ouch.

 

If I had different users running concurrent queries against different objects that just soooo happened to be protected by the same spinlock…. welllllllll…. hmmm…. that is what was happening.  ASE 16 does this a lot better.  Instead of using spinlocks as a concurrency implementation, ASE 16 leverages several different lockless structures such as “Compare & Swap” that are now actually implemented in modern CPUs.  In addition, some metadata cache optimizations, such as pre-caching systypes and other similar system table information, help reduce systables lookups…more on this in a minute.  For more on some of these changes, you may want to watch the replay of Stefan Karlsson’s webcast “Achieving Linear Performance Scalability in Large Multi-core Systems with SAP ASE 16” @ http://event.on24.com/r.htm?e=775357&s=1&k=223D966FF0E5C249A1FD8ACECA0B91F7.  I know he and the SPEED team also have a white paper on this coming out - I am reviewing it at the moment and when he comes back in August from vacation, he will likely shake his head at all my recommended changes....  So look for it in September maybe.

 

Back to our performance test.  We were now down to something like 3 minutes.  Yayyyyyyy!!!

 

But here is where we hit a harder nut to crack - data cache spinlock contention.

 

Yes, the cache was partitioned. But…..

 

So the first thing we did was a common trick - move the system tables to a separate named cache.  A lot of folks think (mistakenly so) that the metadata cache is a system table cache.  NOT.  Query parsing/normalization often involves looking up column names, datatypes, etc.  Query optimization often involves reading systabstats & sysstatistics.  Both are even slower if we have to read those tables from disk...and by default, they are in default data cache and treated just like any other table....which means that big wonking table scan just bumped them out of cache.
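A minimal sketch of that trick - the cache name and size are illustrative only, and note that binding some system tables may require putting the database into single-user mode first:

-- create a small named cache with relaxed LRU replacement
sp_cacheconfig "system_tables_cache", "256M", "mixed", "relaxed"
go
-- bind the statistics tables the optimizer reads (repeat per database as needed)
sp_bindcache "system_tables_cache", "your_vldb_db", "systabstats"
go
sp_bindcache "system_tables_cache", "your_vldb_db", "sysstatistics"
go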

 

If you bind the system tables to a separate cache with a relaxed cache strategy and then look at monDataCache for LogicalReads or CacheSearches, you will get a very interesting picture of how many LRU -> MRU relinkages just the system tables were causing in default data cache.  In some runs, we were seeing a high of 100,000 over ~2 minutes execution time - which is ~1000/sec - or 1 every millisecond.  Not claiming they take a millisecond….but it
is a bit of work that can be saved and reduces cache contention on cache spinlocks.  Of course, binding systypes in ASE 16 might be less of a concern due to the aforementioned fact that now it is fully cached as part of the metadata cache…but....why not?
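A sketch of the check itself, assuming the MDA tables are on and using the column names as I recall them from 15.7 (take two snapshots a few minutes apart and diff the counters):

-- per-cache search and read activity
select CacheName, CacheSearches, LogicalReads, PhysicalReads
from master..monDataCache
order by CacheName
go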

 

By this time, we were down to 1.5 minutes of execution at 64 engines.  A huge drop from the starting point.  We then noticed that one of the key tables was in a separate named cache, but the others were cluttered in with everything else in default data cache.  To prove the point that it was LRU->MRU relinkages that were helping to drive the contention, we moved the tables to another named cache with a relaxed cache strategy.  This dropped us down to 1 minute total execution.  However, we still noticed 10%-15% cache spinlock contention even with the relaxed cache strategy.

 

While we are still not where we would like to be, the exercise proved several important points for VLDB management:

 

  • DOL tables have an advantage over APL in reducing address lock spinlock contention - which can be severe in high engine count configurations.
  • pre-ASE 16 requires a lot more tuning around object and index descriptor spinlocks - which helps - whereas ASE 16’s lockless modifications avoid many of these issues
  • LRU->MRU relinkage plays a role in data cache contention - even when partitioned.  Using separate caches splits the load, and using relaxed cache strategy where possible helps even more.  As the customer noted, perhaps the sp_sysmon recommendation to use ‘relaxed cache strategy’ on named caches other than default data cache deserves more careful consideration instead of being ignored as in the past.

 

There still is some contention with this app keeping us from hitting the perfect state - but again, I think reducing it further will require a SQL change, since a slightly suboptimal query is the root cause of the rest of the cache contention (yes - table scans DO cause other problems...not the problem here, but a 1 million page table scan means 1 million LRU -> MRU relinkages and the associated spinlock issues...sometimes ya just gotta fix the problem vs. tuning around it)....but it was an interesting segue into how some future changes, such as transactional memory in a future release of ASE, will help.

 

The lesson...when running VLDB's on VLH...you might want to consider upgrading to ASE 16 sooner rather than later....in the meanwhile, you might want to look at your memory tuning/named caches and spinlock ratios.  That "everything" in one big default data cache.....that is soooo 1990's.


HugePages for Sybase / SAP ASE databases on Linux


Hello SYB DBAs,

 

 

You may have noticed a message in your DB log that looks like this:

"Could not allocate memory using Huge Pages. Allocated using regular pages. For better performance, reboot the server after configuring enough Huge Pages."

 

 

Don't worry about it: if no huge pages are available, the system automatically falls back to the regular page size of the OS. On Linux, for example, the regular memory page size is 4KB.

You can find out the huge page size with this command:

 

cat /proc/meminfo | grep Hugepagesize
Hugepagesize:       2048 kB


Huge pages speed up memory management.

Huge pages are allocated in one piece and can't be paged out to swap space.

So you can ensure that the DB memory is never paged out.

 

Please also read these notes about HugePages and Sybase/SAP ASE:

2021541 - SAP ASE runs slow due to table scans with regular page allocation

1805750 - SYB: Usage of huge pages on Linux Systems with Sybase ASE

 

For example, if the Sybase instance needs 10GB without huge pages:

 

10GB = 10*1024*1024 KB / 4 KB = 2621440 pages (4 KB pages)
Every page needs an 8-byte entry in the page table: 2621440 * 8 byte = 20971520 byte => 20 MB

For example, if the Sybase instance needs 10GB with huge pages:

 

10GB = 10*1024*1024 KB / 2048 KB = 5120 pages (2 MB pages)
Every page needs an 8-byte entry in the page table: 5120 * 8 byte = 40960 byte => 40 KB

 

These values have to be multiplied by the number of user processes attached to the shared memory, since each process keeps its own page table entries for it - even though the shared memory itself is allocated only once.

 

If you need more information about the page table, TLB and huge pages, you should have a look at Stefan Koehler's blog, which is about Oracle, but the facts are the same:

http://scn.sap.com/community/oracle/blog/2013/06/17/oracle-myths-and-common-misconceptions-about-transparent-huge-pages-for-oracle-databases-on-linux-uncovered

 

 

If you want to use HugePages, follow these steps as root:

 

1) calculate the needed memory

 

select cast(sc.name as varchar(30)) as name, (scc.value*2/1024) as MB
from sysconfigures sc, syscurconfigs scc
where sc.name like 'total%memory' and sc.config = scc.config
go
name                           MB
------------------------------ -----------
total logical memory                  6028
total physical memory                 6400

 

=> Round the ASE "max memory" (total physical memory) setting up to the next closest 256MB, divide by the Hugepagesize, and configure at least that many huge pages.

=> 6400 MB / 256 MB = 25 => already a multiple of 256 MB, OK

2) calculate the needed HugePages

When ASE uses huge pages, it allocates memory to the nearest multiple of 256 MB, so don’t configure ASE exactly to the maximum number of huge pages; leave a small amount unused (e.g. 300-400 MB). That way, if a small increase in size is needed due to a change of a Sybase config parameter, you will not get into trouble.

6400+400=6800
6800*1024/2048=3400
vi /etc/sysctl.conf
vm.nr_hugepages=3400
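The same arithmetic can be pulled straight out of ASE - a sketch that assumes the 2048 kB huge page size shown above, the ~400 MB headroom, and that 'max memory' is stored in 2 kB units:

select ceiling(((scc.value * 2.0 / 1024) + 400) * 1024 / 2048) as nr_hugepages
from sysconfigures sc, syscurconfigs scc
where sc.name = 'max memory' and sc.config = scc.config
go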

 

3) activate the kernel settings, if enough memory is available

sysctl -p

 

 

if the memory is not free, restart the server or add memory

 

4) Allow Sybase ASE owner to make use of available HugePages

vi /etc/security/limits.conf
<Sybase ASE OS owner>   soft memlock unlimited
<Sybase ASE OS owner>   hard memlock unlimited

5) check the config

cat /proc/meminfo | grep Huge
HugePages_Total:    3400
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

 

6) start Sybase/SAP ASE and you will see this message in the log

 

"Allocated memory using Huge pages."

 

 

 

 

7) after DB start

cat /proc/meminfo | grep Huge
HugePages_Total:    3250
HugePages_Free:     1655
HugePages_Rsvd:     1605
HugePages_Surp:        0
Hugepagesize:       2048 kB

 

If you get the same error as before, increase the number of HugePages.

 

More official info:

1805750 - SYB: Usage of huge pages on Linux Systems with Sybase ASE

 

Before and after process details

Here are the before and after effects on the dataserver (the Sybase DB process under Unix) after a fresh restart:

Before:

pmap <sid>
START               SIZE     RSS     PSS   DIRTY    SWAP PERM MAPPING
0000000000400000  32736K  15460K  15460K      0K      0K r-xp /sybase/SMP/ASE-16_0/bin/dataserver
00000000025f7000   5692K   2664K   2664K    216K      0K rwxp /sybase/SMP/ASE-16_0/bin/dataserver
0000000002b86000   2488K   1196K   1196K   1196K      0K rwxp [heap]
0000000142df4000 6291456K 1507196K 1507196K 1507172K      0K rwxs /SYSVba156435
00007fffed33e000     20K     16K      2K      0K      0K r-xp /lib64/libnss_dns-2.11.3.so
[...]
Total:           6507456K 1530568K 1528772K 1509564K      0K 

 

 

ps auxw | head -1; ps auxw | grep <sid>
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
sybsmp   31464 15.5  9.2 6507456 1516852 ?     Ssl  08:56   0:24 /sybase/SMP/ASE-16_0/bin/dataserver -sSMP -d/sybase/SMP/sybsystem/master.dat -e/sybase/SMP/ASE-16_0/install/SMP.log -c/sybase/SMP/ASE-16_0/SMP.cfg -M/sybase/SMP/ASE-16_0 -i/sybase/SMP -N/sybase/SMP/ASE-16_0/sysam/SMP.properties

=> RSS = 1.5 GB

=> VSZ = 6.5 GB

 

After:

pmap <sid>
START               SIZE     RSS     PSS   DIRTY    SWAP PERM MAPPING
0000000000400000  32736K  16096K  16096K      0K      0K r-xp /sybase/SMP/ASE-16_0/bin/dataserver
00000000025f7000   5692K   2664K   2664K    216K      0K rwxp /sybase/SMP/ASE-16_0/bin/dataserver
0000000002b86000   2648K   1244K   1244K   1244K      0K rwxp [heap]
00002aaaaac00000 6291456K      0K      0K      0K      0K rwxs /SYSVba156435
00007fffed33e000     20K     16K      0K      0K      0K r-xp /lib64/libnss_dns-2.11.3.so
00007fffed343000   2044K      0K      0K      0K      0K ---p /lib64/libnss_dns-2.11.3.so
00007fffed542000      4K      4K      4K      4K      0K r-xp /lib64/libnss_dns-2.11.3.so
00007fffed543000      4K      4K      4K      4K      0K rwxp /lib64/libnss_dns-2.11.3.so
00007fffed544000     48K     20K      0K      0K      0K r-xp /lib64/libnss_files-2.11.3.so
00007fffed550000   2044K      0K      0K      0K      0K ---p /lib64/libnss_files-2.11.3.so
[...]
Total:           6507616K  24056K  22094K   2440K      0K 


 

vmsaplnx02:~ # ps auxw | head -1 ; ps auxw | grep <sid>
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
sybsmp    1716  9.6  0.1 281536 22712 ?        Ssl  09:12   0:20 /sybase/SMP/ASE-16_0/bin/dataserver -sSMP -d/sybase/SMP/sybsystem/master.dat -e/sybase/SMP/ASE-16_0/install/SMP.log -c/sybase/SMP/ASE-16_0/SMP.cfg -M/sybase/SMP/ASE-16_0 -i/sybase/SMP -N/sybase/SMP/ASE-16_0/sysam/SMP.properties

=> RSS = 23MB

=> VSZ = 282MB

 

If you compare the RSS/VSZ of the two outputs you will see the benefit.

 

 

I hope this how-to helps you to use your memory resources optimally.

 

Thanks for reading and sharing!

Please feel free to ask or get in contact directly if you need assistance.

 

Best Regards,

Jens Gleichmann

Technology Consultant at Q-Partners Consulting und Management GmbH (www.qpcm.eu)

 

 

References

Virtual memory - Wikipedia, the free encyclopedia

Resident set size - Wikipedia, the free encyclopedia

[Oracle] Myths and common misconceptions about (transparent) huge pages for Oracle databases (on Linux) uncovered

2021541 - SAP ASE runs slow due to table scans with regular page allocation

1805750 - SYB: Usage of huge pages on Linux Systems with Sybase ASE

ASE 16: Data & Index Compression


Okay, so data compression isn’t new to ASE 16 - SAP first supported data compression in ASE 15.7.  Index compression is new in ASE 16, but a bit of background on data compression serves as a useful primer before we can really understand the full considerations.

 

Data compression in ASE 15.7 was implemented at 3 basic levels (see the DDL sketch after the list):

 

  • Row compression - which tried to compress the storage of each row through basic compression techniques such as reducing duplicate characters, etc.
  • Page compression - which used the industry standard dictionary compression techniques to replace data on the page with tokens.
  • LOB compression - which used standard industry ZLib type compression on large LOB objects, which would not lend themselves well to page compression.
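A minimal DDL sketch of these three levels on a DOL table - the table, column names and the LOB compression level are illustrative only:

create table claim_history (
    claim_id    bigint      not null,
    claim_city  varchar(60) not null,
    claim_notes text        null
)
lock datarows
with compression = page,      -- row | page for regular columns
     lob_compression = 5      -- ZLib-style compression level for the text column
go

-- existing tables can be switched and then rebuilt to compress the stored rows
alter table claim_history set compression = page
go
reorg rebuild claim_history
go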

 

Before we get into index compression, let’s discuss page compression a bit more because it is the style used in index compression and there is a lot of FUD about it.  First of all, dictionary compression is extremely common in analytic systems.  As noted, SAP ASE simply looks at each column value, determines if there is already an encoded dictionary token, and if so, simply replaces that data element in the storage row with the encoded token.  If the value is not in the dictionary, a new token is assigned to the new value and then it is replaced in the row.

 

In the DBMS industry, there is a lot of noise about page vs. table level dictionary compression and who has better compression, etc.  Some vendors argue that table level dictionaries are more effective as you don’t have to store the dictionary of encoded values on every page.  This nets them slightly better compression ratios.  However, what they don’t explain is the trade-offs - such as the fact that page level compression is more likely to resolve any data value to a low cardinality value which can be encoded using a single byte (or less) whereas table level is more likely to consider data elements as high cardinality and need more bytes for encoding.  Consider for example, the data element “city”.  In a typical database page, there likely will be at most 10’s to low 100’s of distinct values - easily represented in a single byte.  However, in the full table, there could be 10’s of thousands, needing 2 bytes for encoding - possibly 3.

 

The other consideration is that the dictionary table has to be maintained.  This can be a source of potential bottleneck.  For example, at a table level, multiple concurrent writers would need access to the dictionary table and likely would need to be sequenced.  A similar problem pops up with other compression techniques in which adjacent rows are included and the same set of values for a set of columns is replaced with a single data encoded value for the multiple rows.  Yes, it improves compression.  But at an interesting cost for subsequent DML operations that need to modify one of those values as now that single value for multiple rows needs to be replaced with an encoded value for each row…a bit of an escalation from attempting to modify a single row and ending up having to modify several others.

 

As a consequence, ASE’s page compression is a trade-off in which maximum compression is not the goal, but rather a balance between data compression and user concurrency.  One aspect to keep in mind is that data is compressed both on disk as well as in memory.  This has benefits as well as trade-offs.  For example, one benefit is that it is likely that more data will fit into memory, thus reducing the number of physical reads.  However, it also means that logical reads of previously cached data will be slower, as each logical read has to decompress the data again.

 

That last point is a bit of a concern as the overhead can be considerable.  While SAP is looking at ways to reduce this overhead, any reduction would come at a cost of increased memory consumption - which could lead to other problems.  As a result, the most effective solution is to minimize the amount of logical reads that any query needs to perform.

 

 

I will put it in plain English:  You need better indexing.

 

 

But doesn’t adding indexes consume even more space??  …the very thing we are trying to reduce???

 

Yes, but, let’s face it - any frequently executed query (other than an aggregate) that does 100’s or 1000’s of LIO’s - even if all cached/in memory - per row returned is a good candidate for better indexing.    Any penalty from compression is a side issue at that point and just serves to magnify an already bad situation.  For example, on an early SAP system migration, we found a query that ran every 2 seconds that did a table scan of 200,000 pages in ASE.  Yep, completely in memory.  Yep, noticeably slower than when not compressed.  But was compression the real issue??  Nope - the real issue was the in-memory tablescan of 200,000 pages that with a proper index might have been only 10’s of pages of LIO.  Adding the index to  support the query would not only fix the penalty of compression - but also run orders of magnitude faster than it did previously.
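One hedged way to spot those candidates is the MDA tables again - objects burning logical I/O through table scans (IndexID = 0), assuming per-object monitoring is enabled:

select top 20 DBName, ObjectName, IndexID, LogicalReads, PhysicalReads, UsedCount
from master..monOpenObjectActivity
where IndexID = 0
order by LogicalReads desc
go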

 

Now, that doesn’t mean you will be adding an index or 10 to every table.  However, it does mean that you will need to carefully monitor for queries that are running slower and then consider adding better indexing.  Probably it will mean 1-2 additional indexes on a few tables.  Do the 1-2 additional indexes add extra overhead for DML operations?  Yes.  But not enough to be really measurable, unless you don’t have any indexes to begin with.  However, of course, it will add space - as well as perhaps extra maintenance time with respect to dbcc’s, reorgs, update index statistics and other operations.

 

But let’s talk about space for a minute.  First of all, this discussion will focus on DOL tables.  Yeah, I know, there are tons of legacy systems out there still using APL tables.  Sorry.  Too many variances with that to get into.

 

Now, DOL indexes from the very beginning did two really interesting tricks to try to reduce space consumption:

 

  • Index suffix compression on intermediate nodes - intermediate nodes only contain enough of the index prefix to be able to determine B-tree traversal
  • Non-unique index duplicate RID lists.  In a non-unique index, at the leaf level, if a key value had multiple rows associated with it, rather than storing the key value(s) multiple times (once for each leaf row), the key value(s) would be only stored once with a list of RIDs that it pointed to.

 

While this reduced the unnecessary space consumption, the more indexes you have and the more distinct the keys, the more space is necessary to store them.

 

Now, then, along comes ASE 16 and its support for index compression.  ASE 16 supports index compression for leaf pages only (non-leaf intermediate nodes still have suffix compression) and uses a technique of index prefix compression.  The simple solution would be to compress each index key column individually as a whole unit - and this would work well for multi-key indexes.  For example, in a 4 column index key, the first 3 are likely highly repetitive within any given index leaf page.   However, this might not reduce space at all for many single column indexes - or for multi-key indexes in which the last index key had lengthy prefixes.   For example, consider an index on city name - with the names sorted, often the first multiple characters of the index key values are identical - e.g. Jackson (as in Jackson, MS) and Jacksonville (as in FL).  This doesn’t only affect character data - consider datetime columns in which the date values - and perhaps the hours, minutes and possibly even seconds - are identical, with the only difference in milliseconds.  Of course, the datetime isn’t stored as character - it is a structure of two integers - a date component and a time component.  But likely the date component and the first multiple bytes of the time component may be identical and could be compressed.

 

There are a couple of problems with that….consider a numbering sequence with numbers 100001, 100002, 100003, 100004, … and so forth.  It would appear that looking at a prefix of a partial column, I might be able to compress out the first 5 digits.  Not so fast.  Remember, we are looking at the numeric representation.  What is stored is the binary representation.  On big endian systems (e.g. IBM Power series), this still works out as 100001=0x000186a1, 100002=0x000186a2, 100003=0x000186a3, 100004=0x000186a4,….    However, on little endian systems (such as Intel x86), the LSB ordering of the bytes results instead in 100001=0xa1860100, 100002=0xa2860100, 100003=0xa3860100, 100004=0xa4860100,….  Ugh!  This also impacts datatypes that are derived from numeric types as ’04 Aug 2014’ = 0x7da30000 and ’05 Aug 2014’ = 0x7ea30000 on x86 LSB platforms where IBM Power and other MSB chips would store them as 0x0000a37d and 0x0000a37e respectively - which lends itself much better to prefix compression.

 

The second problem is that most DBMSs internally store the fixed length columns first and then the variable length columns.  This reduces the amount of overhead for each row and thus saves some space.  For example, when the data is stored in column order and there are variable length columns, each row would need a column offset table specifying where each column began.  By storing the fixed length columns first, the table structure can simply have a single offset table that is used for every row for the fixed length columns and a much smaller column offset table for the variable length columns.  Consider for example an index on columns {col1, col2, col3, col4} in which columns col1, col2 and col4 are fixed length.  If the internal structure was {col1, col2, col4, col3}, then I would only need a column offset table for a single column.  This fixed-length-first column reorganization is why some DBMSs often suggest specifying fixed length columns first for a table anyhow.

 

This impacts prefix compression as I would need to make sure that the columns are re-ordered into the correct column sequence as specified in the index key.  Unfortunately, this means that now the larger column offset table takes up more space in the index row than it previously did - which means the index format for index prefix compression could be even larger than it is today.  However, once compression kicks in, the prefix compression is extremely efficient at reducing index size as the typical leading columns of a multi-key index are identical and can be replaced with a single token.

 

A second aspect is that not all indexes may compress very well.  Consider a single column index on a monotonically increasing key - such as an identity column, order number, trade dates, etc.  If the index is unique (e.g. an identity or pseudo-key column), then the prefix might not be very repetitious at all - especially on x86 platforms in which the leading bytes are byte-swapped to the end.  If we then arbitrarily apply page compression, each distinctive value would have an entry in the dictionary, plus the token - which means that we would still need the same amount of space for the dictionary values - but we would also have double the space for the tokens.

 

This leads to the following observations about index compression (see the sketch after the list):

 

  • Multi-column indexes will likely have much greater compression ratios than single column indexes.
  • Indexes with distinctive leading columns (e.g. timestamp) may not compress well.
  • Single column indexes on monotonic increasing values (e.g. order number) may not compress well - especially on little endian (LSB) platforms such as x86.
  • Big endian (MSB) platforms such as IBM Power series might have better compression ratios.
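Put together, a sketch of what selective index compression looks like in ASE 16 - compress the multi-column index, leave the single-column monotonic key alone (table and index names are carried over from the illustrative DDL above):

create index claim_city_id_idx
    on claim_history (claim_city, claim_id)
    with index_compression = page
go

-- a unique, monotonically increasing key - likely not worth compressing on x86
create unique index claim_id_idx
    on claim_history (claim_id)
    with index_compression = none
go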

 

With those kinds of considerations, how do you decide whether or not to compress a table and/or its indexes??   This is where SAP Control Center 3.3 tries to help with the new Compression Advisor feature - which is pretty neat.  What it does is:

 

  • Allow you to specify the data, LOB and index compression you wish
  • Override the compression on individual columns or indexes as desired
  • Specify a sampling size (default 10%)

 

Then it makes a copy of the table structure, loads it with the sample size data volume, then applies the compression attributes you’ve selected and finally runs a reorg rebuild to compress the existing data in the table.  The reason I like this wizard is that it does two things:

 

  • Produces a fairly reliable compression ratio for space savings
  • Gives a good indication of how long the actual compression will take

 

The first version simply reported an overall space savings and compression ratio for the entire table.   As noted, this might include indexes that shouldn’t be compressed - or rather might not compress well.  There is an enhancement coming in the near future that I have been playing with that actually breaks down the compression by index and provides a result similar to:

 

[Screenshot: Compression Advisor results broken down by index]

 

If you want to play around with index compression, you can download the developer’s edition of ASE 16 from http://scn.sap.com/community/developer-center/oltp-db.  In addition, there was a webcast about compression and other features that you can watch the replay of at http://event.on24.com/r.htm?e=775360&s=1&k=D1FE610794ECF6A138793A50967D48D4.  Finally, there is a demo of the SCC Compression Advisor at https://www.youtube.com/watch?v=XiQW_Xl6bVU&feature=youtu.be.

Which wash buffer needs tuning?


Greetings fellow DBAs,

 

I am trying to reduce the 'Buffers Washed Dirty' for my system's default data cache  (as seen in sp_sysmon: Data Cache Management->cache: Default Data Cache->Buffer Wash Behavior section). 

 

I have varied the wash sizes in my 2K, 4K, and 16K pools with inconsistent results.  Is there a way to determine which pool requires a bigger wash setting?

 

Thanks!

Doug

Msg 3121 on restore, created with dbisql client


I'm running on ASE 15.5 ESD 5[.2, etc] Linux RHEL 6, 64bit, and had an amusing experience with restore you might want to know about.

 

I restored a database I have been building on another server, and got several instances of this message:

 

Recovery of database 'rt_test' will undo incomplete nested top actions

Msg 3121, Level 16, State 1:

Index Broker_BrokerCategory.pk_Broker_BrokerCategory (objid=44524161, indid=2) may be invalid. It uses CHAR or VARCHAR columns in its key and was created under a different sort order ID (0) or charset ID (0) than the ones on this server (SortOrd=50, CharSet=1)

 

Strangely, both the test server above and the production server where this database was created have the SortOrd 50 and CharSet 1 as their defaults.  On the production server, I dropped and recreated the primary key constraints (in the production copy) using isql, and now dumps of the production database and loads on the test server no longer exhibit this error.

 

I think what I originally did was use dbisql to issue the commands to create the primary key constraints on these tables - the tables were empty at the time the constraints were added.  I strongly suspect that the dbisql/jdbc session produced a different result than the same commands issued using isql. 

 

You have been warned  :-)

UKSUG hosts Technical Data Management Event in London – November 20th, 2014


If you are a data management professional working with SAP’s Database and Technology products in the UK or Europe don’t miss TechSelect 2014 on November 20th in London. Hosted by UKSUG, this full day event will include technical sessions on SAP ASE, SAP IQ, SAP Replication Server and SAP HANA. The event will kick-off with a keynote by Irfan Khan, Chief Technology Officer of SAP's Global Customer operations, followed by a presentation by Peter Thawley, VP, SAP Database & Technology, Office of the CTO who will discuss the key architectural directions of the SAP Data Management Platform.  After the keynote presentations there will be sessions on the latest product innovations and roadmaps.

 

  

 

For more information and to register: http://uksug.com/events/techselect-2014

 

UKSUG TechSelect 2014
November 20th
The Hotel Russell
London

SYB: Update statistics - function datachange bug or feature?


Hi ASE specialists,

 

Once again I'm facing an interesting issue on Sybase / SAP ASE.

Maybe you know how important it is to have up-to-date statistics in your DB. The newer the statistics, the better for the execution plan (in the normal case).

If you have configured your ATM (Automatic Table Maintenance) in dbacockpit correctly, you should have good statistics as long as everything is running fine (e.g. the job scheduler).

Wrong or right?

SYB_GET_DATACHANGE

Let's test it with function module SYB_GET_DATACHANGE. Here you can specify the table and get back the data change ratio (%) of the table and its partitions. You can also check this manually with the "datachange" function (select datachange(object_name, partition_name, colname)); the ABAP part uses exactly the same call.

Maybe it is a little more convenient to execute it in ABAP.

 

So currently we have a data change ratio of 62%, but no partitions have changes.

OK, we will collect statistics anyway. For this we use the "update all statistics" command. Why update all statistics? To avoid missing statistics on partitions, indexes and columns.

 

Description from the Sybase documentation:

"update all statistics updates all the statistics information for a given table. Adaptive Server keeps statistics about the distribution of pages within a table, and uses these statistics when considering whether or not to use a parallel scan in query processing on partitioned tables, and which index (es) to use in query processing. The optimization of your queries depends on the accuracy of the stored statistics."

 

 

select datachange('SAPSR3./BIC/FZSDHUC03',NULL,NULL)
go
---------------------------                   62.873411
(1 row affected)
update all statistics "SAPSR3./BIC/FZSDHUC03"
go
sp_flushstats
go
(return status = 0)
select datachange('SAPSR3./BIC/FZSDHUC03',NULL,NULL)
go
---------------------------                   62.873411

 

 

sp_flushstats is used to flush the in-memory counters to disk. This is also done by the housekeeper, so it is not strictly necessary to do it manually here.

 

You should never use datachange without specifying a column, because the result will be aggregated across all columns - but SAP does it this way (SYB_GET_DATACHANGE), so...
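If you do want a per-column value, the call looks like this - the column name here is purely hypothetical, pick one of the table's own columns:

select datachange('SAPSR3./BIC/FZSDHUC03', NULL, 'CALDAY')
go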

 

Normally the data change ratio should now be nearly 0. No changes have happened to the table, so is it a bug?

 

To get more details you can use the tool optdiag.

Here is the syntax:

optdiag statistics <SID>..<table> -Usapsa -Ppasswd -X -o output.out

 

The output showed me that not all columns were updated by the "update all statistics" command, but even after manually updating these columns the result of the datachange output stayed the same.

So it seems to be a bug. An OSS message has been created, but there is currently no answer to it. The DB version used was ASE 15.7 SP122.

 

##########

Update

##########

The development team released a note within 2 days with a correction on SYB_UPDATE_STATS:

2079837 - SYB: Avoid redundant statistics update due to CR 770415

 

It seems to be connected with the hashing in some releases, if we have a look into the coding:

"to avoid running into CR 770415, we need to enforce no hashing for partitioned tables on older releases

[...]

if partcnt > 1 and ( dbrel < '15.7.0.132' or ( dbrel+0(4) = '16.0' and dbrel < '16.0.01.00' ) ).

[...]

Please implement this note when your ASE release is lower than 15.7 SP132, or lower than SP1 on ASE 16.

 

This will solve this issue!

 

##########

Update End

##########

 

I hope this helps you to understand the statistics.

 

Best Regards,

Jens Gleichmann

 

Details:

update all statistics

update index statistics

update statistics

update table statistics

optdiag

datachange function

For all the Sybase professionals in Aus & NZ region


RDBMS, column-based database, modelling & metadata…

 

  • Are you an SAP Sybase technical professional in Australia or New Zealand?
  • Would you like to learn the latest information and catch up with your network?

 

Have you heard that SAP has a multi-stream technical training conference coming up which includes Sybase data management & modelling?

 

Taking place in Sydney in November, products to be discussed from a technical perspective include SAP Sybase:

 

- Adaptive Server Enterprise (ASE)
- Replication Server
- IQ
- PowerDesigner &
- Event Stream Processor

 

Other sessions at the 2-day event, SAP Architect and Developer Summit, include SAP HANA Development, UI5 Development & Business Intelligence.

With a total of 39 sessions from 27 international & local speakers, there is a lot to learn at this event, created expressly for developers, engineers and architects.

 

Stream C is where you’ll find the following presentations:

 

The Future for SAP Adaptive Server Enterprise - 16 and Beyond
by Richard Pledererer

 

SAP IQ - The Key Ingredient in your Big Data Analytics Strategy
by Chase Hacker

 

SAP Exodus - automatically migrating custom applications to the SAP Data Management Platform
by Rob Verschoor

 

SAP Replication Server - 2014 and Beyond
Modelling & Architecture with SAP PowerDesigner
SAP Event Stream Processor


all by
Rudi Leibbrandt

 

You can find the blog by Thomas Jung, one of the speakers coming to our shores for the event, here.


Event Details:

SAP Architect and Developer Summit

November 20-21, 2014

Australian Technology Park, Sydney

Cost: AUD 695.00


Please let me know if I can help with any queries.

Catch up with you there!



Fujitsu and SAP update the Fujitsu Power Appliance for SAP Adaptive Server Enterprise


Fujitsu and SAP have announced an updated version of the Fujitsu Power Appliance for SAP Adaptive Server Enterprise. The appliances are now available in 4 different sizes (Entry, Midsize, Enterprise and DataCenter), NetApp has been added as a storage option to the DataCenter configuration, and SUSE is offered as an alternative operating system.

 

The appliance is currently based on a BYOL model for SAP ASE licenses and is shipped with a pre-configured 90-day SAP ASE trial license. Below is a summary of the 4 configuration options that are available:

 

[Image: summary table of the four appliance configurations]

For more details of the appliance please refer to the following link : http://solutions.us.fujitsu.com/ASE


Quick access to ASE information to deal with your customers


Have you ever wanted to quickly find information on ASE? This day has come.

 

This “Assets Snapshot” gives access to the information you need to deal more effectively with your customers.

 

You will find several assets relevant for customer-facing meetings (presentations, videos, sales guide, value proposition, roadmap, customer stories...), with deep links to the PartnerEdge Portal or YouTube.

 

This ASE Asset Snapshot is uploaded on the PartnerEdge portal.

 

Try it!

SAP Adaptive Server Enterprise (ASE) on Amazon Web Services (AWS)


SAP has put Adaptive Server Enterprise (ASE) in the cloud. You can now use Adaptive Server Enterprise (ASE) 15.7 Enterprise Edition on Windows and Linux on Amazon Web Services (AWS) under the Bring Your Own License (BYOL) model. SAP has created ASE Amazon Machine Images (AMIs), and users can launch an ASE instance from the AMIs on AWS. The BYOL model allows users to use their existing ASE licenses on AWS as a deployment option.

 

Please go to Amazon marketplace (https://aws.amazon.com/marketplace) to look for ASE AMIs or click on the following links directly:

 

Windows: https://aws.amazon.com/marketplace/pp/B00PG79OWM

Linux: https://aws.amazon.com/marketplace/pp/B00PG7GZJM

 

To use this, please sign up for an Amazon AWS account and ensure you have the ASE Enterprise Edition CPU licenses from SAP. Follow the instructions of the guide (https://s3.amazonaws.com/awsmp-ReleaseNotes/SAP_ASE_BYOL_AWS_HOWTO_GUIDE_2.pdf) and create a VPC (Virtual Private Cloud) along with ENI (Elastic Network Interfaces) to get a fixed host. Then generate your license key on SAP Service Marketplace with the MAC address of the host. Log on to your host and copy the license key over to activate your ASE server.

 

Instance types of m3.large and above are supported except GPU G2 Double Extra Large and Cluster GPU Quadruple Extra Large.

 

For more details on the platforms/features supported, please visit the following page:

2086750 - What versions of ASE are supported on Amazon Web Services (AWS) infrastructure?

 

For technical support, you can contact SAP support and, as always, log a case via the existing channels. Also, don’t forget that you have forums like the SAP Community Network (SCN) where you can post your questions and get answers from other customers or from in-house experts.











Call for participation in the SAP ASE Beta Program - Participate and help shape the future of SAP ASE



2015 will bring exciting new innovations to SAP Adaptive Server Enterprise.  We are seeking customers and partners interested in testing the planned new functionality as part of a beta program.  Specifically, the program will focus on testing functionality in the following areas:


  • In-memory processing providing the following capabilities
    • Compiled queries
    • Use of SSD for better performance
    • Improved performance using transactional memory
  • Workload Analyzer enabling capture and replay of database transactions
  • Data Store Access Management enabling optimal placement of partitions on appropriate storage
  • HADR with automated monitoring and failover for ASE


Description and requirements of the beta program:

 

  • To participate in the program, you will need to sign a legal agreement with SAP
  • As a beta participant, you can select one or more of the areas outlined above.
  • You will receive a copy of the beta software and the documentation
  • You will need to generate a test plan to test the features of interest to you
  • Participate in the calls with SAP engineering and support to discuss your feedback and resolve issues.


By participating in the beta program, you can actively help shape the future of the product releases as well as become aware of product features that may help with your future development and implementation plans.


If you are interested in participating in the beta program or have questions regarding the program,

please contact Vinod at vinod.chandran@sap.com or call 925-236-6419.

Announcing ISUG-TECH 2015 Conference in Atlanta – Don’t Miss It!


We’re excited to announce that we have teamed up with ISUG-TECH once again to host the ISUG-TECH Conference in Atlanta, GA on March 29th - April 2nd. Over 200 SAP data managers and developers attended last year’s event to get in-depth, technical information on the SAP Data Management portfolio of products, including SAP ASE, SAP HANA, SAP IQ, SAP Replication Server, SAP PowerDesigner, SAP PowerBuilder, and SAP SQL Anywhere. This year’s event raises the bar with more sessions, additional networking opportunities, and keynote and plenary sessions that explore the future of data management.

 

What’s more you’ll get a first look at the next release of SAP ASE. We’ll cover new features and functions for workload analysis and optimization, in-memory
processing, enhancements to Cluster Edition, Cloud deployment support, and much more.  This is a technical event – a who’s who of SAP ASE experts will be on hand to answer your questions and share their knowledge, so join us at the conference and get the information you need to run your systems.

 

Check out the detailed agenda and register today at:

  http://my.isug.com/techwave2015

 

 

 

What is Adaptive Server Platform Edition?


In 2015, SAP introduced Adaptive Server Platform Edition, which consists of SAP ASE, SAP IQ, and SAP Replication Server. ASPE is a package of database-related technologies found in typical customer applications and provides essential services for OLTP, reporting and analysis, availability, and disaster recovery.  With Adaptive Server Platform Edition, customers are able to deploy the ASPE licenses as ASE, IQ or Replication Server licenses, giving customers the advantage of having an adaptive server platform.

 

The value of Adaptive Server Platform Edition can be described in three areas.

 

  • Deployment Flexibility - IT departments have the flexibility to repurpose licenses across projects or switch the license mix of those products at any time.
  • Business Agility - Businesses have much better agility so they can easily resize or re-architect environments in response to market changes, seasonal peak periods, or new business demands.
  • Better TCO - ASPE includes several popular ASE options and IQ options in the basic functionality.  Some of the options now included as standard features of ASPE are encryption, partitioning, compression, and LDAP support leading to even better TCO than before. Customers can also decrease license costs for future projects by leveraging the flexibility ASPE provides.

 

The ASE and IQ products in Adaptive Server Platform Edition and options are the same as before and contain no engineering changes.  Product documentation & roadmaps remain the same.  ASPE represents a simplified pricing strategy for ASE, Replication Server and IQ, with built in flexibility to move licenses between the components of the package.

 

The latest version of SAP Replication Server, packaged as Replication Server, Premium Edition (included in ASPE), supports ASE, HANA, IQ, SAP Data Services, and Hadoop.  SAP Replication Server supports multiple data replication use cases, such as consolidating data from different sources, distributing data amongst heterogeneous database types, as well as the common use case of providing disaster recovery features to ASE customers.
Customers can easily replicate committed transactions from ASE or HANA to any SAP DB or into our ETL engine.  A licensable option for replication to/from 3rd party databases (Oracle, MSFT, IBM) is also available.  The new Hadoop target enables customers to replicate data directly from the database to their Hadoop lake - providing a new level of extensibility to ASE-based applications.

 

SAP customers can convert their ASE, IQ, and Rep Server licenses to Adaptive Server Platform Edition licenses by contacting their SAP sales representative.

Coming soon to a cluster near you.....rolling upgrades!!!!


ASE Cluster Edition now supports rolling upgrades.  

 

An oft-requested feature from customers considering ASE Cluster Edition is support for rolling upgrades.  While this was originally planned for the ASE 16 Cluster Edition release planned for later this year, the good news is that support for this feature has been expedited and is now available in the in-market version - starting with ASE 15.7 CE sp133 as currently planned (as usual, all caveats apply about plans changing - but the key is it will be fairly soon).

 

What does this mean?  In the future, when patches are released for ASE Cluster Edition, the patch documentation will state whether the patch is certified for a rolling upgrade or not.  If the patch is certified for a rolling upgrade, the DBA can apply the patch without shutting down the cluster.  One requirement, of course, is that the cluster nodes must be using a private install of the ASE binaries vs. a single shared cluster file system installation.  With this support, there are three methods of minimizing or eliminating downtime entirely when upgrading ASE Cluster Edition:

 

Rolling Upgrades

 

If the patch is certified for a rolling upgrade, the DBA can apply the patch in a rolling fashion with zero downtime by using the following steps:

 

• Use the workload manager to failover/migrate logical clusters off the node to be patched
• Once all workload has been migrated off the node, shut it down
• Patch the local binary copy for that node
• Restart that node of the cluster - at this point it should rejoin the cluster
• Failback/migrate workload back onto the node using the workload manager
• Repeat for each node in the cluster

 

As an interesting point of reference, the lead engineer on this project did a review of patches for earlier releases and noted that most of the patches would have been certifiable for rolling upgrades.  This led to the decision to expedite releasing this capability ahead of plan.

 

Minimal Down-time Upgrades

 

This capability has implicitly always been available and should have been used as a best practice.  First of all, to understand what is gained from this method, you must first understand the full downtime for a normal upgrade:

 

• Users are kicked off the system
• The DBMS is shut down
• The software binary is applied (takes some 10’s of minutes or longer)
• The DBMS is restarted (can take multiple minutes)
• Application access is restored

 

The question is whether this can be reduced when using clustered DBMS implementations when the patch is not certified for rolling upgrades.  The answer, of course, is “yes” - by following a best practice that some term “minimal down-time upgrades”.

 

In this strategy, the nodes of the clusters are thought of as belonging to one of at least two sets.  The first set of nodes will be those upgraded while the second will be the nodes that provide services while the first set are patched.  For example, in a 4-node cluster, you might consider 2 nodes in each set.  For a 3 node cluster, perhaps 2 for the first set and 1 for the second.  The process then is as follows:

 

• Use the workload manager to fail/migrate all workloads to the second set of nodes
• Shutdown the first set of nodes
• Patch the first set of nodes
• Shutdown the second set of nodes
• Restart the first set of nodes - check/verify the LC’s are all pointing to these nodes
• Restart the applications
• Patch the second set of nodes
• Restart the second set of nodes
• Re-distribute the workload using the workload manager as desired

 

You might have noticed that there is downtime in the middle - between shutting down the second set of nodes and restarting the first set.  However, this should be just the time it takes to start the cluster nodes and not the 10’s of minutes that would also be necessary if patching in between - hence this method is sometimes referred to as the “minimal down-time” approach.

 

Also, you need to be careful in defining the sets of nodes to take down at once.  If a logical cluster doesn’t span both sets of nodes with primary and failover nodes, then, depending on the down-routing mode, applications associated with those logical clusters may or may not be available.  This could be exploited in situations where some applications need higher availability than others - non-critical applications would be down for the full upgrade, while others would only be unavailable for the restart of the first upgraded nodes.

 

Major Upgrades/Avoiding Down-time

 

One of the bigger risks to system availability is when applying major upgrades to the DBMS.  While this doesn’t happen as often as patches, most of the time such upgrades affect the system catalogs, the cluster interconnect protocols or other DBMS internals, which prevents a rolling upgrade.  While the minimal down-time approach above could still be used, many businesses want even better application availability, including:

 

• Ability to avoid down-time as much as possible - even the restart time, as that could be tens of minutes depending on memory size, tempdb size(s), database recovery times, etc.
• Ability to run different major releases for a period of time to allow rolling back the upgrade if significant problems are experienced post upgrade

 

Across the industry, there is only one solution for this - replicating the data to another clustered (or non-clustered) system.  This can be done by physical replication of log records or logical replication using SAP Replication Server.  Generally, logical replication is the most tenable solution as it allows log records to be sent in either direction.  For example, some customers like to perform the upgrade, flip to the upgraded system and run for a period of time (e.g. 2 weeks), then flip back to the un-upgraded environment and run there for an equal time frame as a second affirmation, before finally flipping back to the upgraded system and upgrading the replicated copy.
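
As a sketch of how the flip itself is typically driven when warm standby replication is used, the commands below - issued with isql against the Replication Server - show the general pattern.  The logical connection and server.database names (LDS.tradedb, OLDASE.tradedb, NEWASE.tradedb) are placeholders, and quiesce/validation steps are omitted; follow the Replication Server administration guide for the full procedure.

-- Switch the active side of the warm standby pair from the old (un-upgraded)
-- server to the upgraded one; applications are redirected once this completes
switch active for LDS.tradedb to NEWASE.tradedb
go

-- Monitor progress of the switch
admin logical_status, LDS, tradedb
go

-- Resume replication to the old active (now the standby) so it keeps
-- receiving changes - preserving the fallback path described above
resume connection to OLDASE.tradedb
go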

 

Using a replicated copy, the only outage to the application is during the switch itself - which can be made transparent to the end user via middle-tier components.  The degree of transparency may differ depending on the component’s ability to understand database connection contexts, etc.  The simplest form is a hardware switch, which causes the application to receive a connection-drop message; the application should then attempt a reconnect and, if successful (which it likely would be), resubmit any in-flight transactions.

 

One consideration SAP is looking into is melding the upcoming ASE HADR technology with Cluster Edition.  ASE HADR (planned for the upcoming ASE 16 SP02 release - with the earlier caveat about 'planned' dates) allows fully independent ASE installations to be viewed as an HADR cluster with fully transparent client application failover and other capabilities that in the past were only available with ASE/HA or ASE Cluster Edition.  However, such an implementation is a long-term future consideration at this point.

 

Summary

 

In summary, with support for rolling upgrades, ASE Cluster Edition now provides even higher availability than before - and ASE CE matches the capabilities of competitive cluster solutions with respect to online upgrades.


New: SAP ASE Edge Edition for Small and Medium Businesses



SAP has improved upon SAP ASE Small Business Edition (ASE SBE) with the release of SAP ASE Edge Edition, providing small and medium-sized businesses with the same enterprise-level features found in SAP ASE Enterprise Edition.  SAP ASE Edge Edition can run on physical or virtual machines with 4 cores or fewer.  Machines are not limited to 2 chips as they were for SAP ASE SBE.  This gives small businesses the ability to leverage the benefits of virtualization on today's more powerful, multi-core platforms.

 

SAP ASE Edge Edition replaces SAP ASE SBE and includes many options that were not available to SAP ASE SBE customers.  Every license includes:

  • Security and Directory Services, which provides SSL, LDAP authentication, row-level security, and more for the highest levels of security and protection from unauthorized access
  • Encryption of sensitive information such as credit card numbers or SSNs that can only be decrypted by authorized users (see the brief example after this list)
  • Intelligent Partitioning of data based on its content, which improves performance, shortens maintenance times, and simplifies operations for aging data
  • Compression, which can lower storage costs and improve I/O performance
  • Warm standby replication of ASE Edge data to protect businesses from data loss and preserve accessibility in the event of a system failure
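
As a brief illustration of the column encryption capability called out above, here is a minimal sketch using assumed key, table and column names; the algorithm, key length, key protection options and permission model should be taken from the ASE encrypted columns documentation for your release.

-- Create an encryption key (name and options are illustrative)
create encryption key cc_key for AES with keylength 256
go

-- Encrypt the sensitive column with that key
create table customer_payment
(
    cust_id     int         not null,
    card_number varchar(19) encrypt with cc_key
)
go

-- Only logins or roles granted decrypt permission see the clear text
grant decrypt on customer_payment(card_number) to billing_role
go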

 

SAP ASE Edge Edition provides small and medium sized businesses with enterprise-grade features for enhanced performance, advanced security, and data availability in the event of hardware failure.  It can be purchased through SAP partners at a low price point that will fit the budget of small businesses.

Digest of Recently Published ASE KBAs for Week of 1 Feb 2015


Purpose

 

The purpose of this blog is to promote the SAP KBAs published, or modified and republished, for SAP Sybase ASE during the week of February 1st, 2015.

You will need to be logged into Service Marketplace in order to view the contents of these KBAs.

 

[ Due to a reporting problem, the titles are somewhat truncated.  I will update them once the problem is cleaned up.  -bret ]

 

KBA      Title
1872688  Changes to the versioning format in SAP Sybase ASE 15.7.
1922006  When will ASE 15.5 and 15.7 be end-of-life?
1952370  ASE: error 814 State: 7 negative keep count while running update statistics.
2000310  ASE: Alter database command is taking a long time to complete.
2074220  Error 325 - Adaptive Server finds no legal query plan for this statement.
2074283  Msg 18204 State 1: Procedure 'sp_add_resource_limit', Line 206: Unknown limit type 'idle_time'.
2080956  How to Build the DBCCDB Database for DBCC CHECKSTORAGE.
2081577  Error 693 is returned when running 'create index' and 'dump database' simultaneously.
2084785  A process is infected with signal 11 in close_range__fdpr().
2089677  How to configure syb_default_pool on SPARC T4 or T5 system.
2091031  Targeted CR List for ASE 15.7 SP132.
2105799  ASE 64bit for Windows may fail to configure a data cache or fail to boot with a large memory configuration.
2120932  ASE Infected with signal 11 at ssql_unlink_stmtmetrics
2122114  Identity start value jumps after load of database or transaction log dump
2123826  Errors 3935, 9591, and 515 are returned when running an INSERT that references the reserve_identity() function.
2124412  ASE optimizer may compile a stored procedure twice when "procedure deferred compilation" is OFF.
2124804  ASE patch installer reports 'File exists on this system and is newer than the file being installed'.
2125004  The master database transaction log grows every time sp_helpsort is executed.
2126553  Under UTF8, sp_spaceusage raises error 19537 "Invalid syntax or illegal use of the USING clause"
2127755  ASE - Error 539 Unexpected internal access methods error 0, state 31 while executing an UPDATE statement.

How to search the old Sybase Infobase "Solved Cases" knowledgebase


The old Sybase Infobase knowledgebase ("Solved Cases") collection is now searchable through the SAP Sybase support portal.

  1. Go to http://support.sap.com, find the "Support Portal" title in the upper left corner, click on the little black triangle to pull down the list of portals, and select the Sybase portal.

    [Screenshot: sybportal.png]

    (Alternatively, go directly to https://websmp107.sap-ag.de/sybase/support )

 

2.  Log in.


3.  Click on "KBAs and Solved cases"


4. Expand the "show advanced search options" widget

 

[Screenshot: sybadvsrch.png]

 

5. Verify that the "Solved Cases" checkbox is checked.  (You may want to deselect the others to test that your search is really working against the old archive, but all new information is being created as KBAs and Notes.)

[Screenshot: sybsolvedcases.png]

6. Enter your search terms

 

7. Press  "Go"

ASE 15.7 SP132 released on 5 Feb 2015


SAP released ASE 15.7 SP132 on 5 Feb 2015; it is now available for download from the A-Z Index | SAP Support Portal

(Click on the "A" tab, find and click on "SAP Adaptive Server Enterprise", click on "Sybase ASE 15.7", and find the SP132 package)

TMC Bonds Uses SAP ASE Cluster Edition for Uninterrupted Service


Maximum availability in financial markets is critical. Any downtime can make the difference between profit and loss, and being able to recover from a potential disaster is essential. TMC Bonds, a financial services firm in New York, needed a system that would provide the shortest downtime window possible.

 

To achieve this, it implemented SAP ASE Cluster Edition to provide instantaneous failover capabilities in the event of production outages.

The result: system recovery has been reduced from an hour to mere seconds per instance. The new system, which runs on Red Hat Enterprise Linux and an EMC VMAX storage area network, has achieved key benefits for TMC Bonds:

     


• 89% annual increase in trading volume
• 4.5 million core transactions per hour
• 2–3 seconds for system failover, down from a potential hour
TMC relies on SAP to support high performance and future growth; it plans to test the new version of the software as it becomes available.  Read the full customer success story.
