Wednesday 20 December 2017

Installing Jenkins

The procedures on this page are for new installations of Jenkins on a single/local machine.
Jenkins is typically run as a standalone application in its own process with the built-in Java servlet container/application server (Jetty).
Jenkins can also be run as a servlet in different Java servlet containers such as Apache Tomcat or GlassFish. However, instructions for setting up these types of installations are beyond the scope of this page.
Note: Although this page focusses on local installations of Jenkins, this content can also be used to help set up Jenkins in production environments.

Prerequisites

Minimum hardware requirements:
  • 256 MB of RAM
  • 1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container)
Recommended hardware configuration for a small team:
  • 1 GB+ of RAM
  • 50 GB+ of drive space
Software requirements:
  • Java 8 - either a Java Runtime Environment (JRE) or a Java Development Kit (JDK) is fine
    Note: This is not a requirement if running Jenkins as a Docker container.

On macOS and Linux

  1. Open up a terminal window.
  2. Download the jenkinsci/blueocean image and run it as a container in
    Docker using the following docker run command:
    docker run \
      -u root \
      --rm \
      -d \
      -p 8080:8080 \
      -v jenkins-data:/var/jenkins_home \
      -v /var/run/docker.sock:/var/run/docker.sock \
      jenkinsci/blueocean
  3. Proceed to the Post-installation setup wizard.

On Windows

  1. Open up a command prompt window.
  2. Download the jenkinsci/blueocean image and run it as a container in Docker using the following docker run command:
    docker run ^
      -u root ^
      --rm ^
      -d ^
      -p 8080:8080 ^
      -v jenkins-data:/var/jenkins_home ^
      -v /var/run/docker.sock:/var/run/docker.sock ^
      jenkinsci/blueocean
    These options are identical to those used in the macOS and Linux command above.
  3. Proceed to the Post-installation setup wizard.

Accessing the Jenkins/Blue Ocean Docker container

If you have some experience with Docker and you wish or need to access the jenkinsci/blueocean container through a terminal/command prompt using the docker exec command, you can add an option like --name jenkins-blueocean (with the docker run above), which would give the jenkinsci/blueocean container the name "jenkins-blueocean".
This means you could access the container (through a separate terminal/command prompt window) with a docker exec command like:
docker exec -it jenkins-blueocean bash

macOS

Jenkins can be installed on macOS either with the installer package from the Jenkins website, or using Homebrew:
  • Install the latest release version:
brew install jenkins
  • Install the LTS version:
brew install jenkins-lts

Jenkins

Jenkins is a self-contained, open source automation server which can be used to automate all sorts of tasks related to building, testing, and delivering or deploying software.
Jenkins can be installed through native system packages, Docker, or even run standalone on any machine with a Java Runtime Environment (JRE) installed.

Getting Started with the Guided Tour

Prerequisites:
  • A machine with:
    • 256 MB of RAM, although more than 512 MB is recommended
    • 10 GB of drive space (for Jenkins and your Docker image)
  • The following software installed:
    • Java 8 (either a JRE or Java Development Kit (JDK) is fine)
    • Docker (navigate to Get Docker at the top of the website to access the Docker download that’s suitable for your platform)

Download and run Jenkins

  1. Download Jenkins.
  2. Open up a terminal in the download directory.
  3. Run java -jar jenkins.war --httpPort=8080.
  4. Browse to http://localhost:8080.
  5. Follow the instructions to complete the installation.
When the installation is complete, you can start putting Jenkins to work!

How to delete messages from a JMS queue

connect('weblogic', 'weblogic', 't3://localhost:7003')
serverRuntime()
cd('/JMSRuntime/MS1.jms/JMSServers/dizzyworldJMSServer/Destinations/DizzyworldJMSModule!dizzyworldqueue')
# An empty selector string deletes all messages on the destination;
# pass a JMS message selector to delete only matching messages.
cmo.deleteMessages('')

Changing the password for an existing user (passwordchange.py)

DomainName = "base_domain"
ADMINUrl = "t3://localhost:7001"
ADMINUser = "pavan"
oldPassword = "pavan123"
newPassword = "pavan456"

print '*****************'
connect(ADMINUser, oldPassword, ADMINUrl)
cd('/SecurityConfiguration/' + DomainName + '/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.resetUserPassword(ADMINUser, newPassword)
print '*****************'
disconnect()

print '*** connecting with new password ***'
connect(ADMINUser, newPassword, ADMINUrl)

Creating users and assigning to groups (usercreation.py)

connect('weblogic','weblogic','t3://localhost:7001')
serverConfig()
cd('/SecurityConfiguration/base_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.createUser('pavan','pavan123',"")
cmo.addMemberToGroup('Administrators','pavan')
disconnect()
exit()

Starting the Admin Server using Node Manager / from a remote location

Open the nodemanager.properties file, located at /home/bea/weblogic91/common/nodemanager, and change the following entry:
SecureListener=false

Log in to the console and navigate to Machines -> Machine-1 -> Node Manager -> Type: Plain



Step 1) cd /home/bea/weblogic92/common/bin
        ./wlst.sh
Step 2) Start Node Manager:
        i)  /home/bea/weblogic92/server/bin/startNodeManager.sh
            or
        ii) startNodeManager(verbose='true', NodeManagerHome='/home/bea/weblogic92/common/nodemanager', ListenPort='5556', ListenAddress='localhost')
Step 3) Connect to Node Manager:
        wls:/offline> nmConnect('weblogic', 'weblogic', 'localhost', '5556', 'dev_domain', '/home/bea/user_projects/domains/dev_domain', 'plain')
        Connecting to Node Manager ...
        Successfully Connected to Node Manager.

Step 4) Start the Admin Server:
        wls:/nm/dev_domain> prps = makePropertiesObject('weblogic.ListenPort=7001')
        wls:/nm/dev_domain> nmStart('AdminServer', props=prps)
        Starting server AdminServer ...
        Successfully started server AdminServer ...

Checking the server status using Node Manager


wls:/nm/dev_domain> nmServerStatus('AdminServer')

RUNNING


To kill a server using Node Manager

wls:/nm/dev_domain> nmKill('AdminServer')
Killing server AdminServer ...
Successfully killed server AdminServer ...


To check the Node Manager version

wls:/nm/dev_domain> nmVersion()
The Node Manager version that you are currently connected to is 9.2.3.0.

To stop Node Manager

stopNodeManager()

Disconnecting from Node Manager


nmDisconnect()

Checking the server health using WLST

connect('weblogic','weblogic','t3://localhost:7001')
domainRuntime()
cd('ServerRuntimes')
servers=domainRuntimeService.getServerRuntimes()
for server in servers:
        serverName=server.getName();
        print '**************************************************\n'
        print '##############   ',serverName,    '###############'
        print '**************************************************\n'
        print '##### Server State           #####', server.getState()
        print '##### Server ListenAddress   #####', server.getListenAddress()
        print '##### Server ListenPort      #####', server.getListenPort()
        print '##### Server Health State    #####', server.getHealthState()
exit()


Running WLST in Online mode:
wls:/(offline)> connect('username','password', 't3://localhost:7001')
Connecting to weblogic server instance running at t3://localhost:7001 as username weblogic ...
Successfully connected to Admin Server 'myserver' that belongs to domain 'mydomain'.

wls:/mydomain/serverConfig>

Creating a domain with a script:

middleware_home = '/usr/weblogic/wlserver'
# Open a domain template.
readTemplate(middleware_home + '/common/templates/domains/wls.jar')
cd('Servers/AdminServer')
set('ListenPort', 7001)
set('ListenAddress', '192.168.10.1')
create('AdminServer', 'SSL')
cd('SSL/AdminServer')
set('Enabled', 'True')
set('ListenPort', 7002)
cd('/')
cd('Security/base_domain/User/weblogic')
cmo.setName('weblogic')
cmo.setPassword('secretPassword123')
setOption('OverwriteDomain', 'true')
setOption('ServerStartMode', 'prod')
# Write the configured domain to a target directory (illustrative path).
domaintarget = '/home/bea/user_projects/domains/base_domain'
writeDomain(domaintarget)
closeTemplate()



Creating Domain using WLST

print "Reading existing domain....."
readDomain('/home/bea/user_projects/domains/prod_domain')
print "Writing the existing domain into template file...."
writeTemplate('/home/bea/user_projects/domains/prod_new_domain.jar')
print "closing the Domain ..........."
closeDomain()
print "Creating new domain with newly created template file.............."
createDomain('/home/bea/user_projects/domains/prod_new_domain.jar','/home/bea/user_projects/domains/prod_new_domain','weblogic','weblogic')

Checking deployed application status using WLST

print "deployed application status"
connect('weblogic','weblogic','t3://localhost:8001')
print "**************Deployed application status****************"
ls('AppDeployments')
print "*********************************************************"
disconnect()
exit()

Undeploying an application using WLST

./wlst.sh undeploy.py

connect('weblogic','weblogic','t3://localhost:8001')
print "undeploying application........."
undeploy('benefits')
print "........................."
disconnect()
exit()

Deployment using WLST.

deploy.py 

print '***********************************************************************'
connect('weblogic','weblogic','t3://localhost:7001')
print '***********************************************************************'
edit()
print '***********************************************************************'
startEdit()
print '***********************************************************************'
print '***********************************************************************'
deploy('benefits','/home/application/benefits.war',targets="ms1,ms2")
print '***********************************************************************'
save()
print '***********************************************************************'
activate()
print '***********************************************************************'
disconnect()

Tuesday 20 June 2017

Message Type Patterns

The message itself is simply some sort of data structure, such as a string, a byte array, a record, or an object. It can be interpreted simply as data, as the description of a command to be invoked on the receiver, or as the description of an event that occurred in the sender. The sender can send a Command Message, specifying a function or method on the receiver that the sender wishes to invoke; a Document Message, enabling the sender to transmit one of its data structures to the receiver; or an Event Message, notifying the receiver of a change in the sender.
The following message type patterns are commonly used in SOA.

Messaging Channel Patterns

Channels, also known as queues, are logical pathways to transport messages. A channel behaves like a collection or array of messages, but one that is magically shared across multiple computers and can be used concurrently by multiple applications. 
A service provider is a program that sends a message by writing the message to a channel. A consumer receives a message from a channel. There are different kinds of messaging channels available. 

Point-to-Point Channel


A point-to-point channel ensures that only one consumer consumes any given message. If the channel has multiple receivers, only one of them can successfully consume a particular message. If multiple receivers try to consume a single message, the channel ensures that only one of them succeeds, so the receivers do not have to coordinate with each other. The channel can still have multiple consumers consuming multiple messages concurrently, but any one message is consumed by only a single receiver.
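This competing-consumers behaviour can be sketched with Python's thread-safe `queue.Queue`, whose `get()` hands each item to exactly one caller. The two-consumer setup and the message payloads below are purely illustrative:

```python
import queue
import threading

channel = queue.Queue()   # the point-to-point channel
received = []             # records which consumer got which message
lock = threading.Lock()

def consumer(name):
    # Each get() removes the message from the channel, so no other
    # consumer can ever see it again: one receiver per message.
    while True:
        try:
            msg = channel.get(timeout=0.2)
        except queue.Empty:
            return
        with lock:
            received.append((name, msg))

for i in range(10):
    channel.put(i)

threads = [threading.Thread(target=consumer, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every message was consumed exactly once, by one of the two consumers.
assert sorted(m for _, m in received) == list(range(10))
```

Both consumers run concurrently, yet the queue guarantees no message is delivered twice, which is exactly the coordination the pattern removes from the receivers.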

Publish-Subscribe Channel

A publish-subscribe channel is based on the Observer pattern [9]: a single input channel splits into multiple output channels, one for each subscriber. When an event is published into the publish-subscribe channel, a copy of the message is delivered to each of the output channels. Each output channel has a one-to-one topology, allowing only one consumer to consume a message. The event is considered consumed only when all of the subscribers have been notified.
A publish-subscribe channel can be useful for systems management, error debugging, and different levels of testing.
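The fan-out into one output channel per subscriber can be sketched in a few lines of Python; the class name and the subscriber names below are invented for illustration:

```python
import queue

class PublishSubscribeChannel:
    """Sketch: one input that fans out to one output queue per subscriber."""
    def __init__(self):
        self._outputs = {}

    def subscribe(self, name):
        q = queue.Queue()        # each subscriber gets its own output channel
        self._outputs[name] = q
        return q

    def publish(self, message):
        # The same message is delivered to every output channel.
        for q in self._outputs.values():
            q.put(message)

channel = PublishSubscribeChannel()
audit = channel.subscribe("audit")
billing = channel.subscribe("billing")

channel.publish({"event": "order-created", "id": 42})
audit_msg = audit.get()
billing_msg = billing.get()
assert audit_msg == billing_msg == {"event": "order-created", "id": 42}
```

Each output queue behaves like a point-to-point channel of its own, which is why only one consumer per subscription sees the message.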

Datatype Channel

In a messaging system there are typically several separate datatype channels, one for each type of data. All of the messages on a given channel contain the same type of data: the service provider sends data to the channel matching its type, and the consumer receives data from the appropriate datatype channel.

Dead Letter Channel

A dead letter channel is a separate channel dedicated to bad or invalid messages. From this channel, messages can be rerouted to the mainstream channel, or to a separate channel for special processing.

Guaranteed Delivery

This guaranteed delivery mechanism increases system reliability, but at the expense of performance, as it involves a considerable amount of I/O and consumes a large amount of disk space. Therefore, if performance or debugging/testing is the priority, avoid using guaranteed delivery.

Message Bus

A message bus is a combination of a common data model, a common command set, and a messaging infrastructure that allows different heterogeneous systems to communicate through a shared set of interfaces.
A message bus can be considered a universal connector between the various enterprise systems, and a universal interface for client applications that wish to communicate with each other. A message bus requires that all of the applications use the same canonical data model. Applications adding messages to the bus may need to depend on message routers to route the messages to their appropriate final destinations.

Message Routing Patterns

Almost all messaging systems use built-in routing as well as customized routing. Message routers are very important building blocks for any good integration architecture. In contrast to the individual message routing design patterns, this pattern describes a hub-and-spoke architectural style with a small amount of specially embedded routing logic.

In Search of the Right Router

An important decision for an architect is to choose the appropriate routing mechanism. Patterns that will help you make the right decision are: 
  • Pipes and filters
  • Content-based router
  • Content aggregator

Pipes and Filter

The pipes and filters pattern uses abstract pipes to decouple components from each other. A pipe allows one component to send a message into it so that the message can be consumed later by another process unknown to the sender. One potential downside of a pipes-and-filters architecture is the larger number of required channels, which consume memory and CPU cycles. Also, publishing a message to a channel involves a certain amount of overhead, because the data has to be translated from the application-internal format into the messaging infrastructure's own format.
Using pipes and filters also improves module-level unit testing. It can help in preparing a testing framework: it is more efficient to test and debug each core function in isolation, because the test mechanism can be tailored to the specific function.
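The decoupling and per-filter testability can be sketched with Python generators acting as the filters; the filter names (`decrypt`, `deduplicate`, `enrich`) and their toy behaviour are purely illustrative:

```python
def decrypt(messages):
    # Illustrative first filter: "decryption" here is just lower-casing.
    for m in messages:
        yield m.lower()

def deduplicate(messages):
    # Drops messages already seen earlier in the stream.
    seen = set()
    for m in messages:
        if m not in seen:
            seen.add(m)
            yield m

def enrich(messages):
    # Adds derived fields to each message.
    for m in messages:
        yield {"payload": m, "length": len(m)}

# The generators act as pipes: each filter only knows its input stream,
# not which component produced it, so each can be tested in isolation.
source = ["HELLO", "hello", "WORLD"]
result = list(enrich(deduplicate(decrypt(source))))
assert result == [{"payload": "hello", "length": 5},
                  {"payload": "world", "length": 5}]
```

Swapping, reordering, or unit-testing any one filter requires no changes to the others, which is the decoupling the pattern is after.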

Content-Based Router

The content-based router examines the message content and routes the message to a different channel based on the data in the message. When implementing a content-based router, special care should be taken to keep the routing logic easily maintainable. In more sophisticated integration scenarios, the content-based router can be implemented as a configurable rules engine that computes the destination channel based on a set of configurable rules.
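One way to keep the routing logic maintainable is to hold the rules in a single table, as in this illustrative Python sketch (the message fields and channel names are assumptions, not part of any particular product):

```python
def route(message, channels):
    # Examine the message content and pick the destination channel.
    # Keeping all rules in one ordered table makes them easy to audit.
    rules = [
        (lambda m: m.get("type") == "order",    "orders"),
        (lambda m: m.get("priority") == "high", "express"),
    ]
    for predicate, destination in rules:
        if predicate(message):
            channels[destination].append(message)
            return destination
    channels["default"].append(message)   # fallback channel
    return "default"

channels = {"orders": [], "express": [], "default": []}
assert route({"type": "order", "id": 1}, channels) == "orders"
assert route({"type": "alert", "priority": "high"}, channels) == "express"
assert route({"type": "log"}, channels) == "default"
```

A configurable rules engine is essentially this table loaded from configuration instead of being hard-coded.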

Content Aggregator

A content aggregator is a special filter that receives a stream of messages and correlates related messages. When a complete set of messages has been received, the aggregator collects information from each correlated message and publishes a single, aggregated message to the output channel for further processing. The aggregator therefore has to be stateful, because it needs to save each message and its processing state until the aggregation is complete.
When designing an aggregator, we need to specify the following items: 
  • Correlation ID. An identifier that indicates how messages relate to each other
  • End condition. The condition that determines when to stop processing
  • Aggregation algorithm. The algorithm used to combine the received messages into a single output message
Every time the content aggregator receives a new message, it checks whether the message is part of an already existing aggregate or starts a new one. After adding the message, the content aggregator evaluates the end condition for the aggregate. If the condition evaluates to true, a new aggregated message is formed from the aggregate and published to the output channel. If the end condition evaluates to false, no message is published and the content aggregator continues processing.
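The three design items above can be sketched in Python. Here the end condition is a fixed expected message count and the aggregation algorithm is simple concatenation; both are illustrative choices, not the only possible ones:

```python
class Aggregator:
    """Sketch: correlate by ID, end condition = expected count,
    aggregation algorithm = concatenate payloads in arrival order."""
    def __init__(self, expected_count):
        self.expected = expected_count
        self.pending = {}   # correlation id -> payloads received so far (state)

    def receive(self, correlation_id, payload):
        # Add to an existing aggregate, or start a new one.
        parts = self.pending.setdefault(correlation_id, [])
        parts.append(payload)
        # Evaluate the end condition after every message.
        if len(parts) == self.expected:
            del self.pending[correlation_id]
            return "".join(parts)   # publish the aggregated message
        return None                 # end condition false: keep processing

agg = Aggregator(expected_count=3)
assert agg.receive("msg-7", "foo") is None
assert agg.receive("msg-7", "bar") is None
assert agg.receive("msg-7", "baz") == "foobarbaz"
```

The `pending` dictionary is exactly the state the text says an aggregator must keep until the aggregation is complete.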

Service Consumer Patterns

There are several possible types of Service Consumer. In this pattern catalogue we will present a set of consumer patterns. 

Transactional Client

With a transactional receiver, messages can be received without actual removal of the message from the channel. The advantage of this approach is that if the application crashed at this point, the message would still be on the queue after message recovery has been performed; the message would not be lost. After the message processing is finished, and on successful transaction commit, the message is removed from the channel. 
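The receive/rollback/commit behaviour can be roughly sketched in Python using `queue.Queue` as the channel. This is a toy model: a real transactional receiver keeps the message invisible on the channel rather than re-queuing it (so ordering is preserved), which this sketch does not attempt:

```python
import queue

def transactional_receive(channel, handler):
    # Take a message, but put it back if processing fails, so the
    # message is never lost in the middle of the "transaction".
    message = channel.get()
    try:
        result = handler(message)
    except Exception:
        channel.put(message)   # rollback: message stays on the channel
        raise
    return result              # commit: message is gone from the channel

channel = queue.Queue()
channel.put("payment-123")

# A failing handler leaves the message on the queue ...
try:
    transactional_receive(channel, lambda m: 1 / 0)
except ZeroDivisionError:
    pass
assert channel.qsize() == 1

# ... and a successful one removes it.
assert transactional_receive(channel, str.upper) == "PAYMENT-123"
assert channel.qsize() == 0
```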

Polling Consumer

A polling consumer is a message receiver that explicitly polls the channel for messages. A polling consumer restricts the number of messages consumed concurrently by limiting the number of polling threads. In this way, it prevents the application from being blocked by having to process too many requests, and keeps any extra messages queued up until the receiver can process them.
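A sketch of this in Python, where the thread-pool size plays the role of the polling-thread limit; the handler and the pool size are illustrative:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def poll(channel, handler, max_workers=2):
    # The pool size caps how many messages are processed concurrently;
    # any extra messages simply wait their turn.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        while True:
            try:
                msg = channel.get_nowait()
            except queue.Empty:
                break   # nothing left to poll
            futures.append(pool.submit(handler, msg))
        return [f.result() for f in futures]

channel = queue.Queue()
for i in range(5):
    channel.put(i)

squares = poll(channel, lambda m: m * m)
assert sorted(squares) == [0, 1, 4, 9, 16]
```

With `max_workers=2`, at most two messages are ever in flight at once even though five were queued, which is the back-pressure the pattern provides.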

Event-Driven Consumer

An event-driven consumer is invoked by the messaging system when a message arrives on the consumer's channel. The consumer uses an application-specific callback mechanism to pass the message to the application.

Durable Subscriber

A durable subscription saves messages for an off-line subscriber and ensures message delivery when the subscriber reconnects. Thus it prevents published messages from getting lost and ensures guaranteed delivery. A durable subscription has no effect on the normal behavior of the online/active subscription mechanism. 

Idempotent Receiver

The term idempotent originates from mathematics, describing a function that produces the same result when applied to itself, i.e. f(x) = f(f(x)). In a messaging environment, the concept means a message can safely be resent: receiving the same message multiple times has the same effect as receiving it once.
In order to detect and eliminate duplicate messages based on the message identifier, the message consumer has to maintain a buffer of already received message identifiers. One of the key design issues is deciding how long to retain identifiers. In the simplest case, the service provider sends one message at a time, awaiting the receiver's acknowledgment after every message; the consumer then only needs to compare each identifier against the previous one. In practice, this style of communication is very inefficient, especially when significant throughput is required. In those situations, the sender may want to send a whole batch of messages without awaiting acknowledgment for each one. This requires keeping a longer history of identifiers for already received messages, and the size of the subscriber's buffer grows with the number of messages the sender can send without an acknowledgment.
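The identifier buffer can be sketched in Python with a bounded, ordered set. The buffer size and message IDs below are illustrative; note how a buffer that is too small for the sender's unacknowledged window lets an old duplicate slip through:

```python
from collections import OrderedDict

class IdempotentReceiver:
    """Sketch: drop duplicates using a bounded buffer of seen message IDs."""
    def __init__(self, buffer_size=1000):
        self.seen = OrderedDict()     # insertion-ordered set of identifiers
        self.buffer_size = buffer_size
        self.processed = []

    def receive(self, message_id, payload):
        if message_id in self.seen:
            return False                      # duplicate: ignore safely
        self.seen[message_id] = True
        if len(self.seen) > self.buffer_size:
            self.seen.popitem(last=False)     # evict the oldest identifier
        self.processed.append(payload)
        return True

rx = IdempotentReceiver(buffer_size=2)
assert rx.receive("a", 1) is True
assert rx.receive("a", 1) is False    # resend of the same message is dropped
assert rx.receive("b", 2) is True
assert rx.receive("c", 3) is True     # "a" is now evicted from the buffer
assert rx.processed == [1, 2, 3]
```

Because the buffer holds only two identifiers, a very late duplicate of "a" would now be accepted again, which is exactly the sizing trade-off described above.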
An alternative approach to achieving idempotency is to define the semantics of a message such that resending it does not impact the system. For example, rather than defining the message as a relative update like 'Add 3% commission to employee A1000, who has a base salary of $10000', we could change the message to 'Set the commission amount to $300.00 for employee A1000, who has a base salary of $10000'. Both messages achieve the same result, even if the current commission is already $300. The second message is idempotent because receiving it twice has no additional effect. So whenever possible, send absolute values rather than relative updates in messages; in this way we can efficiently achieve idempotency.
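The absolute-versus-relative distinction can be shown in a few lines of Python; the account structure and the 3% rate are illustrative:

```python
account = {"commission": 0.0}

def add_commission(acct, base_salary, rate):
    # Relative update: applying it twice double-counts the commission.
    acct["commission"] += base_salary * rate

def set_commission(acct, amount):
    # Absolute update: applying it any number of times gives the same state.
    acct["commission"] = amount

# Resending the relative message changes the result ...
add_commission(account, 10000, 0.03)
add_commission(account, 10000, 0.03)
assert account["commission"] == 600.0      # commission applied twice: wrong

# ... while resending the absolute message is idempotent.
set_commission(account, 300.0)
set_commission(account, 300.0)
assert account["commission"] == 300.0
```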

Service Factory

A service factory may return a simple method call or even a complex remote process invocation. The service factory invokes the service just like any other method invocation, and can optionally create a customized reply message.

Message Facade Pattern

A message facade can be used asynchronously and maintained independently. It acts as an interceptor between the service consumer and the service provider.

Basic Unix Commands

1) login to the unix/solaris server
   username: aaa  passwd: *******
   Check the default home directory.

2) pwd
   shows the present working directory

3) ls -l
   shows a long listing of the files in the present directory

4) cd ..
   moves to the previous (parent) directory

5) mkdir <directory>
   creates a new directory

6) mkdir -p /home/a/b/c
   creates all the non-existing directories in the path

7) vi <xxxx>
   opens a file for reading or editing

8) cat <XXX>
   displays the contents of a file

9) more <file_XXX>
   displays data page by page

10) tail <xxxx>
    shows the last 10 lines of a file

11) touch <file_name>
    creates an empty (dummy) file

12) ln file1 file2
    creates a link from file1 to file2

13) file <file_name>
    shows what type of file it is; for example:
    $ file *
    acrawley.html:  <>
    admin:          aaaa
    afiedt.buf:     <>
    autosys_env_IBKNYR1:   <>

14) cd /home/<directory_name>
    changes to the /home/<directory_name> directory; likewise you can give any directory path

15) clear
    clears the screen

MD 50 and MD 70 Documents - Oracle application development

Documentation plays a very important role in Oracle Apps implementation projects. Oracle, as part of the AIM methodology, has defined names for the various kinds of documents produced at different stages of a project. This post details them:


BR Documents: Business Requirement documents, mostly produced by the functional members of the implementation team, such as functional project leads/managers. These are the setup documents, based entirely on the BR120 business requirement gathering provided by the business.
You could say these are the as-is process, so BR100 is the to-be process after you gather all the information from the business and map it onto the Oracle system.

MD Documents: Modular Design documents, mostly produced by the technical members of the implementation team, such as technical project leads/managers. These are the design documents, again based on the BR120 business requirement gathering provided by the business.
MDs basically discuss any customization needs, or any special behavior the Oracle system should exhibit that is not standard Oracle functionality. They also describe the tables, interface tables, and forms that are going to be used in the particular modules, and discuss high-level designs such as business flows. MDs are produced after all the functional design is done, when the Oracle system provides no workaround for a particular test scenario and there is no option other than customization.

Now you can understand the basic difference between the two.
Here is the overall flow, with the various documents involved at every stage:
1st Stage: Analysis
Documents used:
RD 120 - Requirement Gathering - As-Is / To-Be Process
BR 150 - Fit/Gap Analysis (done on the basis of the above document)
2nd Stage: Design
Documents used:
BR 100 - Setup document for the functional consultants
MD 200 - Setup document for the technical consultants
3rd Stage: Build - Demo/Prototype
Document used:
BR 050
4th Stage: Testing
Document used:
TE 050
5th Stage: Go Live
6th Stage: Post Production