Tuesday 20 June 2017

Message Type Patterns

The message itself is simply some sort of data structure—such as a string, a byte array, a record, or an object. It can be interpreted simply as data, as the description of a command to be invoked on the receiver, or as the description of an event that occurred in the sender. A sender can send a Command Message, specifying a function or method on the receiver that the sender wishes to invoke. It can send a Document Message, enabling the sender to transmit one of its data structures to the receiver. Or it can send an Event Message, notifying the receiver of a change in the sender. 
These message type patterns (Command, Document, and Event Messages) are commonly used in SOA. 
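To make the distinction concrete, here is a minimal sketch, assuming a JMS provider, in which the sender tags each outgoing message with its type in a header property. The MessageType property name and the PARTNER.IN queue are illustrative assumptions, not part of any standard.

import javax.jms.*;

public class MessageTypeSender {
    // Tags an outgoing message as a Command, Document, or Event Message.
    // The "MessageType" property and the queue name are illustrative only.
    public static void send(Session session, String type, String payload) throws JMSException {
        Destination destination = session.createQueue("PARTNER.IN"); // hypothetical channel
        TextMessage message = session.createTextMessage(payload);    // the message is just a data structure
        message.setStringProperty("MessageType", type);              // "Command", "Document" or "Event"
        session.createProducer(destination).send(message);
    }
}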

Messaging Channel Patterns

Channels, also known as queues, are logical pathways to transport messages. A channel behaves like a collection or array of messages, but one that is magically shared across multiple computers and can be used concurrently by multiple applications. 
A service provider is a program that sends a message by writing the message to a channel. A consumer receives a message from a channel. There are different kinds of messaging channels available. 

Point-to-Point Channel


A point-to-point channel ensures that only one consumer consumes any given message. If the channel has multiple receivers, only one of them can successfully consume a particular message. If multiple receivers try to consume a single message, the channel ensures that only one of them succeeds, so the receivers do not have to coordinate with each other. The channel can still have multiple consumers to consume multiple messages concurrently, but any one message is consumed by only a single receiver. 
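As a rough sketch of this behaviour using the JMS API (the queue name is a made-up example), two competing consumers can be attached to the same queue and the channel will hand any given message to only one of them:

import javax.jms.*;

public class PointToPointExample {
    // Two competing consumers on the same queue: the channel delivers each
    // message to exactly one of them. The queue name is hypothetical.
    public static void run(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("ORDER.QUEUE");
        MessageConsumer consumerA = session.createConsumer(queue);
        MessageConsumer consumerB = session.createConsumer(queue);
        connection.start();

        session.createProducer(queue).send(session.createTextMessage("order-1"));

        // Only one of the two receive() calls returns the message; the other times out.
        Message gotA = consumerA.receive(1000);
        Message gotB = consumerB.receive(1000);
        System.out.println("A=" + gotA + ", B=" + gotB);
        connection.close();
    }
}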

Publish-Subscribe Channel

A publish-subscribe channel is based on the Observer pattern [9]. It describes a single input channel that splits into multiple output channels, one for each subscriber. When an event is published into the publish-subscribe channel, a copy of the message is delivered to each of the output channels. Each output channel has a one-to-one topology, so only one consumer consumes a message from it. The event is considered consumed only when all of the subscribers have been notified. 
A publish-subscribe channel can also be useful for systems management, error debugging, and different levels of testing. 
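The same idea in a minimal JMS sketch (the topic name is hypothetical): every subscriber attached to the topic receives its own copy of the published event.

import javax.jms.*;

public class PublishSubscribeExample {
    // Two subscribers on the same topic: each receives its own copy of
    // every published message. The topic name is hypothetical.
    public static void run(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("PRICE.UPDATES");
        MessageConsumer subscriber1 = session.createConsumer(topic);
        MessageConsumer subscriber2 = session.createConsumer(topic);
        connection.start();

        session.createProducer(topic).send(session.createTextMessage("price changed"));

        // Both subscribers receive the event.
        System.out.println(subscriber1.receive(1000));
        System.out.println(subscriber2.receive(1000));
        connection.close();
    }
}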

Datatype Channel

In a messaging system there can be a separate datatype channel for each type of data, so that all of the messages on a given channel contain the same type of data. Based on the data type, the service provider sends the data to the matching channel, and the consumer receives data from the appropriate datatype channel. 
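A minimal sketch of a sender choosing the channel by payload type; the queue names and the placeholder payload classes are assumptions made up for the example.

import javax.jms.*;

public class DatatypeChannelSender {
    // One channel per data type: the sender picks the queue that matches
    // the payload's type, so each channel carries only one kind of data.
    public static void send(Session session, Object payload) throws JMSException {
        String channel = (payload instanceof PurchaseOrder) ? "PO.CHANNEL"
                       : (payload instanceof Invoice)       ? "INVOICE.CHANNEL"
                       : "GENERIC.CHANNEL";
        Queue queue = session.createQueue(channel);
        session.createProducer(queue).send(session.createObjectMessage((java.io.Serializable) payload));
    }

    // Placeholder payload types, defined only for this sketch.
    static class PurchaseOrder implements java.io.Serializable {}
    static class Invoice implements java.io.Serializable {}
}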

Dead Letter Channel

A dead letter channel is a separate channel dedicated to bad or invalid messages. From this channel, messages can be rerouted to the mainstream channel, or to a separate channel for special processing. 
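A rough sketch of the idea (the APP.DLQ queue name is an assumption): a receiver that cannot validate a message forwards it to a dedicated dead letter queue rather than dropping it.

import javax.jms.*;

public class DeadLetterHandler {
    // Moves a message that fails validation onto a dedicated dead letter
    // queue. From there an operator or a router can later resend it to the
    // mainstream channel or to a special repair channel.
    public static void handle(Session session, Message message, boolean valid) throws JMSException {
        if (!valid) {
            Queue deadLetterQueue = session.createQueue("APP.DLQ"); // hypothetical name
            session.createProducer(deadLetterQueue).send(message);
        }
    }
}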

Guaranteed Delivery

Guaranteed delivery persists messages in a data store so that they survive a failure of the messaging system. This mechanism increases system reliability, but at the expense of performance, as it involves a considerable amount of I/O and consumes a large amount of disk space. Therefore, if performance or debugging/testing is the priority, try to avoid using guaranteed delivery.
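In JMS terms, guaranteed delivery roughly corresponds to persistent delivery mode, as in this small sketch (the queue name is hypothetical):

import javax.jms.*;

public class GuaranteedDeliverySender {
    // Persistent delivery: the broker writes the message to its store before
    // acknowledging the send, so it survives a broker restart at the cost of
    // extra disk I/O.
    public static void send(Session session, String text) throws JMSException {
        MessageProducer producer = session.createProducer(session.createQueue("PAYMENTS.QUEUE"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);       // guaranteed delivery
        // DeliveryMode.NON_PERSISTENT would be faster, but messages could be lost
        producer.send(session.createTextMessage(text));
    }
}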

Message Bus

A message bus is a combination of a common data model, a common command set, and a messaging infrastructure that allows different heterogeneous systems to communicate through a shared set of interfaces.
A message bus can be considered a universal connector between the various enterprise systems, and a universal interface for client applications that wish to communicate with each other. A message bus requires that all of the applications use the same canonical data model. Applications adding messages to the bus may need to depend on message routers to route the messages to the appropriate final destinations. 

Message Routing Patterns

Almost all messaging systems use built-in routers as well as customized routing. Message routers are very important building blocks of any good integration architecture. In contrast to the individual message routing design patterns, this pattern describes a hub-and-spoke architectural style in which routing logic is embedded in a few dedicated components. 

In Search of the Right Router

An important decision for an architect is to choose the appropriate routing mechanism. Patterns that will help you make the right decision are: 
  • Pipes and filters
  • Content-based router
  • Content aggregator

Pipes and Filters

The pipes and filters pattern uses abstract pipes to decouple components from each other. The pipe allows one component to send a message into the pipe so that it can be consumed later by another process that is unknown to the component. One potential downside of a pipes and filters architecture is the larger number of required channels, which consume memory and CPU cycles. Also, publishing a message to a channel involves a certain amount of overhead, because the data has to be translated from the application-internal format into the messaging infrastructure's own format. 
Using pipes and filters also improves module-wise unit testing and can help in preparing a testing framework. Because each core function can be tested and debugged in isolation, the test mechanism can be tailored to the specific function. 
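The following in-process sketch illustrates the idea without a messaging layer: each filter is an independent step and the "pipe" simply passes one step's output to the next. In a real integration the pipe would be a channel between separate processes.

import java.util.List;
import java.util.function.Function;

public class PipesAndFilters {
    // Runs a message through an ordered chain of filters.
    public static String process(String message, List<Function<String, String>> filters) {
        String current = message;
        for (Function<String, String> filter : filters) {
            current = filter.apply(current);   // the pipe hands the result to the next filter
        }
        return current;
    }

    public static void main(String[] args) {
        String result = process("  raw ORDER data  ",
                List.of(String::trim,                // filter 1: clean up
                        String::toLowerCase,         // filter 2: normalize
                        s -> "validated:" + s));     // filter 3: validate/enrich
        System.out.println(result);
        // Each filter can also be unit tested in isolation.
    }
}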

Content-Based Router

The content-based router examines the message content and routes the message onto a different channel based on the data in the message. When implementing a content-based router, special care should be taken to keep the routing logic easily maintainable. In more sophisticated integration scenarios, the content-based router can be implemented as a configurable rules engine that computes the destination channel based on a set of configurable rules. 
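A minimal sketch of such a router; the orderType property and the destination queue names are assumptions for illustration, and in practice the rules would usually live in configuration rather than code.

import javax.jms.*;

public class ContentBasedRouter {
    // Examines the message content and forwards it to a channel chosen
    // from the data itself.
    public static void route(Session session, Message message) throws JMSException {
        String orderType = message.getStringProperty("orderType"); // hypothetical property
        String destination;
        if ("widget".equals(orderType)) {
            destination = "WIDGET.ORDERS";
        } else if ("gadget".equals(orderType)) {
            destination = "GADGET.ORDERS";
        } else {
            destination = "UNKNOWN.ORDERS";  // effectively an invalid-message channel
        }
        session.createProducer(session.createQueue(destination)).send(message);
    }
}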

Content Aggregator

A content aggregator is a special filter that receives a stream of messages and correlates related messages. When a complete set of messages has been received, the aggregator collects information from each correlated message and publishes a single, aggregated message to the output channel for further processing. The aggregator therefore has to be stateful: it must store the received messages and their processing state until the aggregate is fully formed. 
When designing an aggregator, we need to specify the following items: 
  • Correlation ID. An identifier that indicates which messages belong together
  • End condition. The condition that determines when to stop processing
  • Aggregation algorithm. The algorithm used to combine the received messages into a single output message
Every time the content aggregator receives a new message, it checks whether the message is part of an already existing aggregate or starts a new aggregate. After adding the message, the content aggregator evaluates the end condition for the aggregate. If the condition evaluates to true, a new aggregated message is formed from the aggregate and published to the output channel. If the end condition evaluates to false, no message is published and the content aggregator continues processing. 
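A simplified sketch of a stateful aggregator; the fixed expected-count end condition, the concatenation step, and the output queue name are simplifying assumptions for illustration.

import javax.jms.*;
import java.util.*;

public class ContentAggregator {
    private final Map<String, List<String>> aggregates = new HashMap<>();
    private final int expectedCount;                   // end condition: a fixed number of parts

    public ContentAggregator(int expectedCount) {
        this.expectedCount = expectedCount;
    }

    public void onMessage(Session session, TextMessage message) throws JMSException {
        String correlationId = message.getJMSCorrelationID();       // which messages belong together
        List<String> parts = aggregates.computeIfAbsent(correlationId, id -> new ArrayList<>());
        parts.add(message.getText());                               // keep state until complete

        if (parts.size() == expectedCount) {                        // end condition reached
            String combined = String.join("|", parts);              // aggregation algorithm: concatenate
            aggregates.remove(correlationId);
            Queue out = session.createQueue("AGGREGATED.OUT");      // hypothetical output channel
            session.createProducer(out).send(session.createTextMessage(combined));
        }
    }
}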

Service Consumer Patterns

There are several possible types of Service Consumer. In this pattern catalogue we will present a set of consumer patterns. 

Transactional Client

With a transactional receiver, a message can be received without actually removing it from the channel. The advantage of this approach is that if the application crashes at this point, the message is still on the queue after recovery has been performed; the message is not lost. After message processing is finished and the transaction commits successfully, the message is removed from the channel. 
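A minimal JMS sketch of a transactional receiver (the queue name is hypothetical):

import javax.jms.*;

public class TransactionalReceiver {
    // Uses a transacted session: the message stays on the queue until the
    // transaction commits, so a crash during processing does not lose it.
    public static void consumeOne(Connection connection) throws JMSException {
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("ORDERS.QUEUE"));
        connection.start();
        Message message = consumer.receive(5000);
        if (message == null) {
            return;                   // nothing to process
        }
        try {
            // ... process the message here ...
            session.commit();         // only now is the message removed from the channel
        } catch (RuntimeException e) {
            session.rollback();       // the message remains on the queue for redelivery
        }
    }
}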

Polling Consumer

A polling consumer is a message receiver that explicitly polls the channel for messages. A polling consumer restricts the number of concurrently consumed messages by limiting the number of polling threads. In this way, it prevents the application from being blocked by having to process too many requests, and keeps any extra messages queued up until the receiver can process them. 
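A sketch of one polling thread (the queue name is hypothetical); running a fixed number of such threads caps how many messages are processed concurrently, while the rest wait on the queue.

import javax.jms.*;

public class PollingConsumer implements Runnable {
    private final Session session;

    public PollingConsumer(Session session) {
        this.session = session;
    }

    @Override
    public void run() {
        try {
            MessageConsumer consumer = session.createConsumer(session.createQueue("WORK.QUEUE"));
            while (!Thread.currentThread().isInterrupted()) {
                Message message = consumer.receive(1000);  // explicitly poll, waiting up to 1 second
                if (message != null) {
                    // ... process the message ...
                }
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}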

Event-Driven Consumer

An event-driven consumer is invoked by the messaging system when a message arrives on the consumer's channel. The consumer uses an application-specific callback mechanism to pass the message to the application.
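A minimal JMS sketch (the queue name is hypothetical): the provider invokes the registered listener whenever a message arrives, so the application never polls.

import javax.jms.*;

public class EventDrivenConsumer {
    public static void register(Connection connection) throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("EVENTS.QUEUE"));
        consumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                // application-specific callback: hand the message to the application
                System.out.println("Received: " + message);
            }
        });
        connection.start();   // delivery begins once the connection is started
    }
}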

Durable Subscriber

A durable subscription saves messages for an off-line subscriber and ensures message delivery when the subscriber reconnects. Thus it prevents published messages from getting lost and ensures guaranteed delivery. A durable subscription has no effect on the normal behavior of the online/active subscription mechanism. 
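A small JMS sketch of a durable subscription; the client ID, subscription name, and topic name are made-up examples.

import javax.jms.*;

public class DurableSubscriberExample {
    // The durable subscription, identified by client ID plus subscription name,
    // keeps messages while this subscriber is offline and delivers them on reconnect.
    public static void subscribe(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        connection.setClientID("billing-service");        // must be stable across restarts
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("PRICE.UPDATES");
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "billing-sub");
        connection.start();
        Message missed = subscriber.receive(1000);         // includes messages published while offline
        System.out.println(missed);
        connection.close();                                // the subscription itself remains on the broker
    }
}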

Idempotent Receiver

The term idempotent originates from mathematics, where it describes a function that produces the same result when applied to itself, i.e. f(x) = f(f(x)). In a messaging environment, the concept means that a message can safely be resent: receiving the same message multiple times has the same effect as receiving it once. 
In order to detect and eliminate duplicate messages based on the message identifier, the message consumer has to maintain a buffer of already received message identifiers. One of the key design issues is deciding how long to keep these identifiers. In the simplest case, the service provider sends one message at a time, awaiting the receiver's acknowledgment after every message; the consumer then only has to compare the incoming identifier with that of the previous message. In practice, this style of communication is very inefficient, especially when significant throughput is required. In such situations, the sender may want to send a whole batch of messages without awaiting acknowledgment for each one. This requires keeping a longer history of identifiers for already received messages, and the size of the consumer's buffer grows with the number of messages the sender can send without an acknowledgment. 
An alternative approach to achieving idempotency is to define the semantics of a message such that resending it does not impact the system. For example, rather than defining a relative message such as 'Add 3% commission to the employee code A1000 having a base salary of $10000', we could change the message to 'Set the commission amount to $300.00 for the employee code A1000 having a base salary of $10000'. Both messages achieve the same result, even if the current commission is already $300. The second message is idempotent because receiving it twice has no additional effect. So whenever possible, send absolute values in messages and avoid relative updates; in this way idempotency can be achieved efficiently. 
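A sketch of duplicate detection based on a bounded buffer of already-seen message IDs; the buffer size of 1000 is an arbitrary illustration and would in practice be tuned to the sender's unacknowledged-message window.

import javax.jms.*;
import java.util.*;

public class IdempotentReceiver {
    // Remembers recent message IDs and silently drops duplicate deliveries.
    private final Set<String> seenIds = Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 1000;   // bounded history of identifiers
                }
            });

    public void onMessage(Message message) throws JMSException {
        String id = message.getJMSMessageID();
        if (!seenIds.add(id)) {
            return;                         // duplicate delivery: ignore it
        }
        // ... process the message exactly once ...
    }
}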

Service Factory

A service factory may return a simple method call or even a complex remote process invocation. The service factory invokes the service just like any other method invocation and can optionally create a customized reply message. 

Message Facade Pattern

A message facade can be used asynchronously and maintained independently. It acts as an interceptor between the service consumer and the service provider.

Basic Unix Commands

1) login to the unix/solaris server
   username: aaa    passwd: *******
   After login, check the default home directory.

2) pwd
   shows the present working directory

3) ls -l
   shows a listing of the files in the present directory

4) cd ..
   moves to the previous (parent) directory

5) mkdir <directory>
   creates a new directory

6) mkdir -p /home/a/b/c
   creates all the non-existing directories in the path

7) vi <xxxx>
   opens a file for reading or editing

8) cat <XXX>
   displays the contents of a file

9) more <file_XXX>
   displays the file contents page by page

10) tail <xxxx>
    shows the last 10 lines of a file

11) touch <file_name>
    creates a dummy (empty) file

12) ln file1 file2
    creates a link of file1 to file2

13) file <file_name>
    shows what type of file it is, for example:
    $ file *
    acrawley.html:         <>
    admin:                 aaaa
    afiedt.buf:            <>
    autosys_env_IBKNYR1:   <>

14) cd /home/<directory_name>
    changes to the /home/<directory_name> directory; likewise you can give any directory

15) clear
    clears the screen

MD 50 and MD 70 Documents - Oracle application development

Documentation plays a very important role in Oracle Apps implementation projects. Oracle, as part of the AIM methodology, has provided specific names for the various kinds of documents produced at different stages of a project. So here is a blog post detailing the same:


BR Documents: Business Requirement documents, which are mostly prepared by the functional members of the implementation team, such as functional project leads/managers. These are the setup documents, which are based 100% on the BR120 business requirement gathering provided by the business.
You can say these capture the as-is process; BR 100 is then the to-be process, produced after you gather all the information from the business and map it into the Oracle system.

MD Documents: Modular Design documents, which are mostly prepared by the technical members of the implementation team, such as technical project leads/managers. These are the design documents, which are again based on the BR120 business requirement gathering provided by the business.
The MDs basically cover any customization needs or any special behavior the Oracle system should provide that is not standard Oracle functionality. They also describe the tables, interface tables, or forms that will be used in the particular modules, as well as high-level designs such as business flows.
These MDs are prepared after all the functional design is complete, when the Oracle system offers no workaround for a particular test scenario and there is no option other than customization. 

Now you can understand the basic difference between the two.
Here is the overall flow, with the various documents involved at every stage:
1st Stage: Analysis
Documents used:
RD 120 - Requirement Gathering (As-Is / To-Be Process)
BR 150 - Fit/Gap Analysis (done on the basis of the above document)
2nd Stage: Design
Documents used:
BR 100 - Setup document for the functional consultants
MD 200 - Setup document for the technical consultants
3rd Stage: Build (Demo/Prototype)
Document used:
BR 050
4th Stage: Testing
Document used:
TE 050
5th Stage: Go Live
6th Stage: Post Production

BPEL Persistence Properties

  BPEL Persistence Properties – 11g

BPEL persistence properties are used to control when a process needs to dehydrate. Below are the properties we can use to control this for a BPEL component in a composite.
InMemoryOptimization
This property indicates to Oracle BPEL Server that this process is a transient process and dehydration of the instance is not required. When set to true, Oracle BPEL Server keeps the instances of this process in memory only during the course of execution. This property can only be set to true for transient processes (process type does not incur any intermediate dehydration points during execution).
  • false (default): instances are persisted completely and recorded in the dehydration store database for a synchronous BPEL process.
  • true: Oracle BPEL Process Manager keeps instances in memory only.
CompletionPersistPolicy
This property controls if and when to persist instances. If an instance is not saved, it does not appear in Oracle BPEL Console. This property is applicable to transient BPEL processes (process type does not incur any intermediate dehydration points during execution).
This property is only used when inMemoryOptimization is set to true.
This parameter strongly impacts the amount of data stored in the database (in particular, the cube_instance, cube_scope, and work_item tables). It can also impact throughput.
  • on (default): The completed instance is saved normally.
  • deferred: The completed instance is saved, but by a different thread and in another transaction. If the server fails, some instances may not be saved.
  • faulted: Only the faulted instances are saved.
  • off: No instances of this process are saved.
<component name="mybpelproc">
...
<property name="bpel.config.completionPersistPolicy">faulted</property>
<property name="bpel.config.inMemoryOptimization">true</property>
...
</component>
OneWayDeliveryPolicy
This property controls database persistence of messages entering Oracle BPEL Server. It is used when we need a synchronous-style call based on a one-way operation, mainly when we need to make an adapter call synchronous to the BPEL process.
By default, incoming requests are saved in the following delivery service database tables: dlv_message
  • async.persist: Messages are persisted in the database.
  • sync.cache: Messages are stored in memory.
  • sync: Direct invocation occurs on the same thread.
<component name="UnitOfOrderConsumerBPELProcess">
...
<property name="bpel.config.transaction" >required</property>
<property name="bpel.config.oneWayDeliveryPolicy">sync</property>
...
</component>
General Recommendations:
1. If your synchronous process exceeds, say, 1000 instances per hour, it is better to set inMemoryOptimization to true and completionPersistPolicy to faulted. This gives better throughput, since only faulted instances get dehydrated in the database, and it also goes easy on the purge (purging historical instance data from the database).
2. Do not include any constructs that force your process to persist, such as dehydrate, mid-process receive, wait, or onMessage.
3. Have good logging in your BPEL process, so that you can see log messages in the diagnostic log files for troubleshooting.

Some Important join conditions in Oracle apps

GL AND AP    
GL_CODE_COMBINATIONS    AP_INVOICES_ALL
code_combination_id = accts_pay_code_combination_id

GL_CODE_COMBINATIONS    AP_INVOICE_DISTRIBUTIONS_ALL
code_combination_id = dist_code_combination_id

GL_SETS_OF_BOOKS AP_INVOICES_ALL
set_of_books_id = set_of_books_id

GL AND AR
GL_CODE_COMBINATIONS    RA_CUST_TRX_LINE_GL_DIST_ALL
code_combination_id = code_combination_id

GL AND INV
GL_CODE_COMBINATIONS    MTL_SYSTEM_ITEMS_B
code_combination_id = cost_of_sales_account

GL AND PO
GL_CODE_COMBINATIONS    PO_DISTRIBUTIONS_ALL
code_combination_id = code_combination_id

PO AND AP
PO_DISTRIBUTIONS_ALL      AP_INVOICE_DISTRIBUTIONS_ALL
Po_distribution_id = po_distribution_id

PO_VENDORS           AP_INVOICES_ALL
vendor_id = vendor_id

PO AND SHIPMENTS
PO_HEADERS_ALL     RCV_TRANSACTIONS
Po_header_id = po_header_id

PO_DISTRIBUTIONS_ALL      RCV_TRANSACTIONS
Po_distribution_id = po_distribution_id

SHIPMENTS AND AP INVOICE
RCV_TRANSACTIONS           AP_INVOICE_DISTRIBUTIONS_ALL
RCV_TRANSACTION_ID = RCV_TRANSACTION_ID

PO AND  INV
PO_REQUISITION_LINES_ALL MTL_SYSTEM_ITEMS_B
item_id = inventory_item_id
org_id = organization_id

PO AND HRMS
PO_HEADERS_ALL     HR_EMPLOYEES
Agent_id = employee_id

PO AND REQUISITION
PO_DISTRIBUTIONS_ALL      PO_REQ_DISTRIBUTIONS_ALL
req_distribution_id = distribution_id

SHIPMENTS AND INV
RCV_TRANSACTIONS           MTL_SYSTEM_ITEMS_B
Organization_id         =        organization_id

INV AND HRMS
MTL_SYSTEM_ITEMS_B       HR_EMPLOYEES
buyer_id        =        employee_id

OM  AND  AR
OE_ORDER_HEADERS_ALL              RA_CUSTOMER_TRX_LINES_ALL
TO_CHAR( Order_number)    =        interface_line_attribute1

OE_ORDER_LINES_ALL                   RA_CUSTOMER_TRX_LINES_ALL
TO_CHAR(Line_id)     =        interface_line_attribute6  

OE_ORDER_LINES_ALL                   RA_CUSTOMER_TRX_LINES_ALL
reference_customer_trx_line_id       =        customer_trx_line_id

OM AND SHIPPING
OE_ORDER_HEADERS_ALL               WSH_DELIVERY_DETAILS
HEADER_ID     =        SOURCE_HEADER_ID

OE_ORDER_LINES_ALL                 WSH_DELIVERY_DETAILS
LINE_ID         =        SOURCE_LINE_ID

AP AND AR (BANKS)
AR_CASH_RECEIPTS_ALL               AP_BANK_ACCOUNTS
REMITTANCE_BANK_ACCOUNT_ID    =        BANK_ACCOUNT_ID

AP AND AR
HZ_PARTIES                      AP_INVOICES_ALL
PARTY_ID      =        PARTY_ID

OM AND CRM
OE_ORDER_LINES_ALL         CSI_ITEM_INSTANCES (Install Base)
LINE_ID         =        LAST_OE_ORDER_LINE_ID

What is BPEL and Activities

BPEL Introduction:

Oracle BPEL Process Manager:

BPEL stands for Business Process Execution Language. It is an XML-based declarative language that can be used to implement end-to-end business processes. The basic building block of these processes is a service, which could be a web service. BPEL utilizes various adapters to service-enable legacy and custom applications before consuming them in processes. BPEL also provides human workflow, which has a variety of uses.

BPEL provides end-to-end business processes by using BPEL activities such as: 

1) Assign activity      2) Switch activity     3) Receive activity    4) Reply activity
5) Flow activity        6) FlowN activity      7) Invoke activity     8) Pick activity
9) Transform activity  10) Scope activity     11) Email activity     12) Throw activity
13) Wait activity      14) While activity

Assign Activity

This activity provides a method for data manipulation, such as copying the contents of one variable to another. Copy operations enable you to transfer information between variables, expressions, endpoints, and other elements.

Receive Activity

This activity specifies the partner link from which to receive information and the port type and operation for the partner link to invoke. This activity waits for an asynchronous callback response message from a service, such as a loan application approver service. While the BPEL process is waiting, it is dehydrated (compressed and stored) until the callback message arrives. The contents of this response are stored in a response variable in the process.
The receive activity supports the bpelx:property extensions that facilitate the passing of properties through the SOAP header, and the obtaining of SOA runtime system properties for useful information such as tracking.compositeInstanceId and tracking.conversationId.   

Flow Activity

This activity enables you to specify one or more activities to be performed concurrently. A flow activity completes when all activities in the flow have finished processing. Completion of a flow activity includes the possibility that it can be skipped if its enabling condition is false.
For example, assume you use a flow activity to enable two loan offer providers (United Loan service and Star Loan service) to start in parallel. In this case, the flow activity contains two parallel activities – the sequence to invoke the United Loan service and the sequence to invoke the Star Loan service. Each service can take an arbitrary amount of time to complete their loan processes.

FlowN Activity

This activity enables you to create multiple flows equal to the value of N, which is defined at runtime based on the data available and logic within the process. An index variable increments each time a new branch is created, until the index variable reaches the value of N.

Pick Activity

This activity waits for the occurrence of one event in a set of events and performs the activity associated with that event. The occurrence of the events is often mutually exclusive (the process either receives an acceptance or rejection message, but not both). If multiple events occur, the selection of the activity to perform depends on which event occurred first. If the events occur nearly simultaneously, there is a race and the choice of activity to be performed is dependent on both timing and implementation.

Invoke Activity

This activity enables you to specify an operation you want to invoke for the service (identified by its partner link). The operation can be one-way or request-response on a port provided by the service. You can also automatically create variables in an invoke activity. An invoke activity invokes a synchronous web service or initiates an asynchronous web service.
The invoke activity opens a port in the process to send and receive data. It uses this port to submit required data and receive a response. For synchronous callbacks, only one port is needed for both the send and the receive functions.
The invoke activity supports the bpelx:inputProperty and bpelx:outputProperty that facilitate the passing of properties through the SOAP header and the obtaining of SOA runtime system properties for useful information such as the tracking.compositeInstanceId and tracking.conversationId.

Reply Activity

This activity allows the process to send a message in reply to a message that was received through a receive activity. The combination of a receive activity and a reply activity forms a request-response operation on the WSDL port type for the process.

Switch Activity

This activity consists of an ordered list of one or more conditional branches defined in a case branch, followed optionally by an otherwise branch. The branches are considered in the order in which they appear. The first branch whose condition is true is taken and provides the activity performed for the switch. If no branch with a condition is taken, then the otherwise branch is taken. If the otherwise branch is not explicitly specified, then an otherwise branch with an empty activity is assumed to be available. The switch activity is complete when the activity of the selected branch completes.
A switch activity differs in functionality from a flow activity. For example, a flow activity enables a process to gather two loan offers at the same time, but does not compare their values. To compare and make decisions on the values of the two offers, a switch activity is used. The first branch is executed if a defined condition (inside the case branch) is met. If it is not met, the otherwise branch is executed.

Transform Activity

This activity enables you to create a transformation that maps source elements to target elements (for example, incoming purchase order data into outgoing purchase order acknowledgment data).
The Transform dialog in BPEL 1.1 enables you to perform the following tasks:
Define the source and target variables and parts to map.
Specify the transformation mapper file.
The Add icon (second icon) to the right of the Mapper File field opens the XSLT Mapper for creating a new XSL file that graphically maps source and target elements; the Edit icon (third icon) edits an existing XSL file.


Compensate Activity

This activity invokes compensation on an inner scope activity that has successfully completed. This activity can be invoked only from within a fault handler or another compensation handler. Compensation occurs when a process cannot complete several operations after completing others. The process must return and undo the previously completed operations. For example, assume a process is designed to book a rental car, a hotel, and a flight. The process books the car and the hotel, but cannot book a flight for the correct day. In this case, the process performs compensation by unbooking the car and the hotel. The compensation handler is invoked with the compensate activity, which names the scope on which the compensation handler is to be invoked.
Scope Activity

The scope activity partitions a BPEL business process into logically organized sections. It provides a context for variables, fault handling, compensation, event handling, and correlation sets. It is a structured activity that contains one other activity, which may itself contain other activities.