practicing techie
tech oriented notes to self and lessons learned
JavaOne 2012 – on picking sessions and other practicalities
2012-10-05
When attending a large conference such as JavaOne, you need to think about quite a few practicalities in advance to make your attendance a success and to use your time efficiently. This can take up a surprising amount of time and effort that you should account for. The practicalities include travel arrangements and accommodation, but also picking the sessions to attend.
With many parallel sessions, choosing between them can be tricky, and prioritization is not always easy. For instance, during JavaOne there are multiple co-located conferences, such as Oracle OpenWorld and MySQL Connect, which all have interesting sessions. Here are some numbers for JavaOne:
- nearly 500 sessions total
- as many as 18 parallel sessions during peak hours
- over 150 sessions on peak days
So, picking your sessions can be challenging. Possible selection strategies may include:
- short vs. long term benefit – learn about things you know you need now or things you think may be useful in the future
- trend spotting – try to learn what’s in and what’s out. Which technologies people are using, for what kinds of problems and domains, and which solutions are yesterday’s news.
- what’s available – sometimes there aren’t any interesting sessions at a given time, or the interesting ones may have filled up
- semi-random – you can’t really predict in advance whether a session will be useful to you, based just on the author, title and abstract. Occasionally, some exploratory selection may be a good thing
- technology vs. case studies – learning about solutions, or hearing about how to apply them in a given domain
Typically, you may want to hedge your bets and use a combination of different selection strategies.
The most popular sessions may fill up quickly, so you may need to register for them in advance. Go through the session catalog and sign up for the ones that interest you most as soon as possible to ensure access. Often you can still get into full sessions, but it’s not guaranteed, and especially for the “hands-on lab” session types availability may be really limited.
For many presentations there’s some kind of sales message to be read between the lines. The presenter may want to sell you products or services, and/or get you to adopt a particular view of the world. Depending on your situation, and on how pronounced the selling viewpoint is, the session may still include information that you can use. For example, if you’re shopping for products or services this can be a good thing. But if you’re planning on building something yourself, you may not get much out of the session.
Some presentations can also be viewed later, even if you couldn’t attend, for example due to another concurrently running session. Cool, flashy presentations can be very entertaining, but they’re problematic in this respect: you may not get much out of them just by looking at the slides later on. And even if you attended a session, you might not want to skip taking your own notes.
I’ve felt that my time is best spent on getting short intros to new technologies, trend spotting, case studies and talking to people. For actually learning to master a particular technology, you can always take some time to read up on it and try it out in practice, go on a course, read a book, or get hold of a colleague who has experience in the subject. So I don’t usually attend multiple sessions on a given subject. Focusing on a single larger theme from different perspectives can be very useful, though.
But conferences aren’t just for one-way communication; they’re great for dialogue: connecting with other attendees and discussing technologies – in what kinds of contexts and problems to use them, how to use them, and so on. Think, question and discuss. You may be the world’s best expert on matters relating to your exact problem setting!
In my opinion, one of the greatest things about conferences is that they provide an excellent opportunity to think differently and step outside your routine work. They give you a chance to look at the problems and solutions you face in your day-to-day work in an entirely different setting. Ideally this allows you to really think outside the box, which can have a great impact on your work.
Other important practicalities include gadget power management. If you’re using a laptop, mobile phone or tablet for taking notes or other things during sessions, make sure you know your battery capacity. Learning how to save battery power can be useful, as can locating places where you can recharge. Don’t miss any recharging opportunity.
JavaOne and the other Oracle conferences taking place at the same time are huge, and the streets sometimes get really crowded with conference attendees. I don’t know how accurate this is, but an Oracle president said there were on the order of 60,000 attendees. This means that getting accommodation that meets your quality, cost and location expectations can be difficult unless you book early. Two months in advance may not be early enough, so make arrangements as early as you can. JavaOne 2012 is over today, so if you want to attend next year, start making your preparations 🙂
Getting started with Oracle WLS 12c
2012-09-16
I’ve been developing software for different incarnations of the Oracle Application Server in the past (Oracle OC4J 10g R3 and BEA/Oracle WebLogic v8.1 and v10.3), but it’s been quite a while since my last encounter with the server; during recent years I’ve been involved mostly with other application servers. Despite occasional hiccups, I had been reasonably satisfied with the server, so I was curious to give the latest version of Oracle’s application server a quick test drive. Having a background in software development, I thought I’d approach this first from a developer perspective, checking out what the application development workflow (code, build, deploy) feels like with the latest version. Much of the workflow is really about generic Java EE development (as opposed to app server specific development) as long as you adhere to the standard, but I’ve felt that trying to simulate the development workflow gives you a more complete view of what it’s like to work with a particular app server product. Instead of coding a Java EE app myself or porting an existing one, I decided to work with sample applications made available by others.
Installing Oracle WebLogic Server
There are several options for getting WebLogic running for development purposes: a) use Oracle JDeveloper and the embedded WLS server, b) use the IDE of your choice and the WLS zip distribution, or c) use the IDE of your choice and the full WLS installation. Since the focus of my test was to check out WLS for development purposes (but not JDeveloper), I chose option b.
So, I downloaded the following WLS distributions from Oracle:
- WLS Zip Distribution for Oracle WebLogic Server 12.1.1.0
- WLS Supplemental Zip Distribution for Oracle WebLogic Server 12.1.1.0
The first distribution includes the application server itself and weighs approximately 184 MB. The second one includes sample code.
The installation process is pretty well documented in the WLS package README files, though there were a few small gotchas. The supplemental zip distribution also includes a nice set of documentation for the samples, including an architecture description, found in $MW_HOME/wlserver/samples/server/index.html.
Here’s the installation procedure I used:
(The text below has been written for Mac OS X and assumes WLS has been installed in $HOME/opt/wls1211_dev but it should be trivial to adapt the instructions for other configurations.)
# 1. extract WLS (see README.txt in WLS package)
mkdir -p $HOME/opt/wls1211_dev
cd $HOME/opt/wls1211_dev
unzip ~/Downloads/wls1211_dev.zip

# 2. set environment variables
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
export USER_MEM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
export MW_HOME=$HOME/opt/wls1211_dev

# 3. run the installation script
. ./configure.sh
We’ll skip WLS domain creation for now, because the samples setup script creates one for us, and move straight to installing the WLS supplemental distribution.
# wls supplement (see README_SUPP.txt)
unzip ~/Downloads/wls1211_dev_supplemental.zip

# 64-bit environments
. wlsenv.properties

# create WLS domain, server, database etc.
./run_samples.sh
This script sets up a WLS domain, a WLS server and a database server for the sample application, configures datasources etc. When I tried to start up the sample domain at this point, I received an error about the JRE not being found, so I decided to reset the environment by firing up a new shell session and setting the WLS environment variables again:
# start up WLS sample domain
export MW_HOME=$HOME/opt/wls1211_dev
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
export USER_MEM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
$MW_HOME/wlserver/samples/domains/medrec/startWebLogic.sh
If you have a GUI session with your OS, a web browser should open up with the sample application page.
Sample app #1: MedRec
Oracle provides a WLS supplemental zip distribution aimed at development use. The supplement includes code samples demonstrating various aspects of using different Java EE technologies. It also includes a complete Java EE v5.0 (why not v6.0?) sample application called Avitek Medical Records, or MedRec. It claims to “showcase the use of each Java EE component, and illustrates best practice design patterns for component interaction and client development”.
After I got the application server and sample application up and running, I wanted to start browsing the application source code and see how to make modifications.
You can build and deploy the sample application using the following commands:
# set up the environment for the build
export MW_HOME=$HOME/opt/wls1211_dev
. $MW_HOME/wlserver/samples/domains/medrec/bin/setDomainEnv.sh
cd $WL_HOME/samples/server/medrec

# build + deploy
ant -Dnotest=true deploy
The Ant command will build and deploy the new application version, provided you have the application server up and running. (Environment variables set by the WLS installation scripts appeared to interfere somehow with the ones set by setDomainEnv.sh, and I had to start a new shell session to make the build work.)
The sample application includes Eclipse project and classpath files, so you can easily import the application code into Eclipse (e.g. Juno). The application depends on various Java EE and third-party APIs that are bundled with the application, so you’ll initially see lots of errors in Eclipse. The easiest way to get the source code imported and the classpaths set up correctly is to use the Oracle-provided Eclipse distribution, Oracle Enterprise Pack for Eclipse (v12c for Eclipse Juno). Here’s how to import the code in OEPE and create a WLS 12c runtime configuration:
- create new workspace
- configure WebLogic server runtime
  - select: window / show view / other
  - server / servers
  - new server wizard
  - select server type: Oracle / Oracle WebLogic Server 12c
  - and fill in the following:
    - WebLogic home: $HOME/opt/wls1211_dev/wlserver
    - Java home: /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
    - Domain directory: $HOME/opt/wls1211_dev/wlserver/samples/domains/medrec
- import MedRec source code
  - file / import: general / existing projects into workspace
  - select root directory: $HOME/opt/wls1211_dev/wlserver/samples/server/medrec
  - select all 12 projects
- configure target server runtime for each project
  - select project / properties / server or targeted runtimes and choose “Oracle WebLogic Server 12c”. Uncheck the WebLogic 10.3 version.
- refresh all projects
At this point, with all 12 MedRec projects visible in the Eclipse project explorer, you should be able to do a full modify-build-deploy cycle.
Sample app #2: Pet Catalog
The Pet Catalog is a Java EE 6 sample application that demonstrates usage of JavaServer Faces 2.0 and the Java Persistence API. It’s based on a three-tiered architecture on a logical level, but both the presentation and logic tier components are packaged in a single WAR module.
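To make that packaging concrete, here’s a minimal sketch of the kind of JSF 2.0 managed bean backed by JPA that such a single-WAR application contains. The class and entity names below are hypothetical illustrations, not the actual Pet Catalog code:

import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Hypothetical JPA entity (a separate file in practice); the real
// Pet Catalog data model differs in its details.
@Entity
public class Item {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// Hypothetical JSF 2.0 managed bean acting as the logic tier, packaged
// in the same WAR as the Facelets pages that render its data.
@ManagedBean
@RequestScoped
public class CatalogBean {

    // container-injected entity manager; the persistence unit comes from persistence.xml
    @PersistenceContext
    private EntityManager em;

    // backing method for e.g. a JSF dataTable listing catalog items
    public List<Item> getItems() {
        return em.createQuery("SELECT i FROM Item i", Item.class).getResultList();
    }
}

A Facelets page in the same WAR would then refer to the bean with an EL expression such as #{catalogBean.items}.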
With the first sample app, we were able to skip creating a WLS domain because the installation script created one for us, but now we’ll have to create one. In WLS, the concept of a domain refers to a logically related group of WLS servers and/or server clusters that are managed as a unit. Each domain has an administration server, which is used to configure, manage and monitor other servers and resources in that domain. Additional servers in the domain are called managed servers, which are used for deploying and executing Java EE artifacts. The administration server is meant to be used only for administration, though you can deploy applications to it in development installations.
Creating a WLS domain
# setup WLS environment
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
export USER_MEM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
export MW_HOME=$HOME/opt/wls1211_dev
. $MW_HOME/wlserver/server/bin/setWLSEnv.sh

# create a new WLS domain and start WLS
mkdir -p $HOME/wls/dom1
cd $HOME/wls/dom1
$JAVA_HOME/bin/java $JAVA_OPTIONS -Xmx1024m -XX:MaxPermSize=256m weblogic.Server
Building the application
The source code links found on the sample app web pages didn’t seem to be working. However, the application source code comes bundled with NetBeans 7.2 Java EE, so you can get it from NetBeans by choosing:
- File / New Project
- choose project: samples / Java EE / Pet Catalog
Java’s “Write once, run anywhere” is a great value proposition, but especially in the Java EE space, delivering on that proposition has been lacking. Portability issues arose in this case too, when I tried deploying the Pet Catalog app, which apparently had been tested mostly on GlassFish, to WLS. The actual issue seemed to be related more to the particular JPA implementation (EclipseLink) than to standard JPA, but since this is supposed to be a standard Java EE showcase sample application, I think it’s telling evidence of portability issues. Once I managed to find out what was causing the issue, fixing it was simple. Application servers often have their own, sometimes very unintuitive, ways of reporting issues, and troubleshooting is an area where experience with your particular application server product can really make a big difference. Also, with well-architected applications it’s typically the packaging and deployment where portability problems arise, not the actual code.
In this case I ran into a problem with datasource authentication. To fix the deployment issue I had to modify the persistence unit definition in persistence.xml by commenting out the eclipselink.jdbc.user and eclipselink.jdbc.password parameters.
Deploying the application
Create and initialize the database
Pet Catalog uses a MySQL database for persisting data. A database, tables and a user account must be created before deploying the application.
create database petcatalog;
GRANT ALL ON petcatalog.* TO 'pet1'@'localhost' IDENTIFIED BY 'pet1';

cat setup/catalog.sql | /usr/local/mysql/bin/mysql -h 127.0.0.1 -P 3406 -u pet1 -f -p petcatalog
Create a Data Source
Once you’ve set up the database, the database connection or datasource needs to be configured in the application server. To do this, log on to WLS console and do the following:
Choose: Services / Data Sources / New / Generic Data Source
Then on “JDBC Data Source Properties” page fill in the following:
- Name: petCatalogDS
- JNDI Name: jdbc/petcatalog
- Database Type: MySQL
- Database Driver: MySQL’s Driver (Type 4), using com.mysql.jdbc.Driver
And on “Transaction Options” page:
- Supports Global Transactions
- One-Phase Commit
Then “Connection Properties”:
- Database Name: petcatalog
- Host Name: localhost
- Port: 3406
- Database User Name: pet1
- Password: pet1
Then verify the settings on the “Test Database Connection” page.
And finally on “Select Targets” page choose the server to deploy to:
- myserver
Deploy WAR
Finally, deploy the application WAR to WLS. The application should run without customizing any deployment parameters.
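If you want to verify from code that the datasource is wired up correctly, a hypothetical in-container check could look like the following sketch (the class name is made up; the JNDI name matches the datasource configuration above, and the lookup only works inside the container, e.g. from a servlet):

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Hypothetical smoke test: look up the WLS-managed datasource by its
// JNDI name and open a connection to verify the configuration.
public class DataSourceSmokeTest {
    public void check() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/petcatalog");
        Connection conn = ds.getConnection();
        try {
            System.out.println("connected to: " + conn.getMetaData().getURL());
        } finally {
            conn.close();
        }
    }
}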
Conclusions
In my quick test drive I focused mostly on the development-workflow aspects of the WebLogic server (developer distribution), not on operational aspects such as scalability, availability, reliability, operability etc. WLS appears to be a capable, feature-rich Java EE application server, as could be expected from a major vendor, but the zip distribution is also relatively lightweight and ran quite well on my laptop.
WLS has very nice server administration capabilities: you can easily view and edit the configuration using command-line tools, but a comprehensive web-based administration console is also available that lets you perform any server administration task. The server configuration is persisted in XML files (e.g. config.xml) stored under a single filesystem directory tree, which makes it easy to compare and migrate configuration files; the console just manipulates these configuration files through a web UI. The web console has a much more comprehensive feature set than e.g. the one in JBoss EAP 5. WebLogic also features a command-line scripting environment (WLST) that you can use to create, manage and monitor WLS domains. Thanks to the XML-based configuration and the scripting support, backing up and recovering the server configuration, as well as taking snapshots and rolling back changes, should be easy. Deploying the exact same configuration should be simple as well.
It seems odd that the sample application doesn’t showcase all the new features of the latest-and-greatest Java EE specification version that the WebLogic server supports. Also, the basic development-mode installation could’ve been made simpler still, similar to some other app servers where a simple unzip is all you need. Production installation is of course an entirely different story.
SFTP file transfer session activity logging
2012-09-16
Old-school bulk file transfers may be considered out of fashion in a SOA world, but we’re not in a perfectly SOA-enabled world yet. So bulk file transfers still remain widely used, and SFTP is their workhorse.
Typically you would create an operating system account for each inbound file transfer party you want to enable and configure SSH public key authentication. By default, this would also allow full terminal access for the account. What if you don’t want that? Well, just define the SFTP server binary as the login shell for those users to restrict access:
sftp1:x:502:502::/home/sftp1:/usr/libexec/openssh/sftp-server
Simple enough.
What if you want to monitor SFTP session activity on the system? Depending on the SSH and syslogd config, you’ll find session logs in different places, but on Linux this will often be /var/log/secure. OpenSSH v5.3 logs the following lines for an SFTP session:
Sep 16 15:31:34 localhost sshd[4274]: Postponed publickey for sftp1 from 172.16.221.1 port 56069 ssh2
Sep 16 15:31:34 localhost sshd[4273]: Accepted publickey for sftp1 from 172.16.221.1 port 56069 ssh2
Sep 16 15:31:34 localhost sshd[4273]: pam_unix(sshd:session): session opened for user sftp1 by (uid=0)
Sep 16 15:31:34 localhost sshd[4276]: subsystem request for sftp
Sep 16 15:31:36 localhost sshd[4276]: Received disconnect from 172.16.221.1: 11: disconnected by user
Sep 16 15:31:36 localhost sshd[4273]: pam_unix(sshd:session): session closed for user sftp1
You can see the session start, source IP address and user account info there. This is enough information for basic transfer account activity monitoring, and in some cases it may be enough for accounting and billing as well.
But what if you want to customize logging or hook into SFTP session start and end events to allow some form of custom processing? You can create a wrapper script for sftpd and configure that as the user’s login shell:
#!/bin/sh
#
# sftpd wrapper script for executing pre/post session actions
#

# pre session actions and logging here
SOURCE_IP=${SSH_CLIENT%% *}
MSG_SESSION_START="user $LOGNAME session start from $SOURCE_IP"
logger -p local5.notice -t sftpd-wrapper -i "$MSG_SESSION_START"

# start actual SFTP session
/usr/libexec/openssh/sftp-server

# add post session actions here
You could replace the syslogd-based logging command above with your custom logging. The logging command above logs session start events using the local5 log facility with notice priority (see rsyslog.conf(5)). Log entries using the local5 facility can be directed to a custom log file with the following syslogd configuration in /etc/rsyslog.conf:
local5.* /var/log/sftpd.log
So, now you can customize the messages and perform pre/post session actions. If you need to do more advanced reporting on this data or allow non-technical users to do ad-hoc reporting, you might want to put the session data in an RDBMS. You could either add the data to the database directly in the wrapper script, or set up logrotate to rotate your custom log files and configure a postrotate/prerotate script that parses the log file and adds entries to the database in batches.
What if you need to know what exactly goes on inside the file transfer sessions, such as which files are being downloaded or uploaded? OpenSSH sftpd doesn’t log this info by default, the default facility and log level being auth.error. You can change this either globally in sshd_config or per user, by changing the sftpd-wrapper script above like this:
/usr/libexec/openssh/sftp-server -f local5 -l info
This directs sftp-server to log with the local5 facility at info priority for the selected users only. Now your sftpd.log would look like the following:
Sep 16 16:07:19 localhost sftpd-wrapper[4471]: user sftp1 session start from 172.16.221.1
Sep 16 16:07:19 localhost sftp-server[4472]: session opened for local user sftp1 from [172.16.221.1]
Sep 16 16:07:40 localhost sftp-server[4472]: opendir "/home/sftp1"
Sep 16 16:07:40 localhost sftp-server[4472]: closedir "/home/sftp1"
Sep 16 16:07:46 localhost sftp-server[4472]: open "/home/sftp1/transactions.xml" flags WRITE,CREATE,TRUNCATE mode 0644
Sep 16 16:07:51 localhost sftp-server[4472]: close "/home/sftp1/transactions.xml" bytes read 0 written 192062308
Sep 16 16:07:54 localhost sftp-server[4472]: session closed for local user sftp1 from [172.16.221.1]
The log entry format requires some effort to process programmatically, but it’s still manageable. You can identify SFTP session operations, such as individual file transfers, from the log. The log could then be processed using a logrotate postrotate/prerotate script that could e.g. add the data to a database for generating input data for accounting or billing.
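As a sketch of that batch-processing idea, the following hypothetical Java parser extracts file transfer events from log lines like the ones above; the regular expression and the printed fields are my assumptions, and a real script would insert the fields into a database (e.g. with JDBC batch updates) instead of printing them:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for sftp-server log lines such as:
//   Sep 16 16:07:46 localhost sftp-server[4472]: open "/home/sftp1/transactions.xml" ...
// Extracts the timestamp, the PID (a session identifier) and the
// operation with its target path.
public class SftpLogParser {

    private static final Pattern EVENT = Pattern.compile(
        "^(\\w{3} +\\d+ [\\d:]+) \\S+ sftp-server\\[(\\d+)\\]: (open|close|opendir|closedir) \"([^\"]+)\"");

    public static void main(String[] args) throws Exception {
        // usage: java SftpLogParser /var/log/sftpd.log
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = EVENT.matcher(line);
                if (m.find()) {
                    System.out.printf("time=%s session=%s op=%s file=%s%n",
                            m.group(1), m.group(2), m.group(3), m.group(4));
                }
            }
        } finally {
            in.close();
        }
    }
}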
Software versions: RHEL 6.3 and OpenSSH 5.3.
System call tracing is your friend
2012-08-26
After downloading and installing Java SE 7 Update 6, I tried running “java -version” to verify that the JDK was installed properly. To my surprise, the command reported the previous version instead of update 6. I tried troubleshooting the problem using:
pkgutil --verbose --files com.oracle.jdk7u6

installer -dumplog -verbose -pkg '/Volumes/JDK 7 Update 06/JDK 7 Update 06.pkg' -target /
but with no effect. Then, browsing through the parent directories of the previous Java 7 installation, I noticed that with update 6 the installation path was actually
/Library/Java/JavaVirtualMachines/jdk1.7.0_06.jdk
instead of
/Library/Java/JavaVirtualMachines/1.7.0.jdk
as with the previous Java 7 update releases, so I was using the old absolute path in my “java -version” command.
Now, on Linux one of my first troubleshooting methods would’ve been to use the strace command, but for some reason this doesn’t come instinctively to me on Mac OS X. On the Mac the equivalent command is called dtruss, and it would’ve revealed the new installation path immediately, just as strace would have:
dtruss 'installer -dumplog -verbose -pkg /Volumes/JDK\ 7\ Update\ 06/JDK\ 7\ Update\ 06.pkg -target /'
...
kevent(0x3, 0x153C67788, 0x1) = 1 0
audit_session_self(0x7FB1EB9640E0, 0x7FB1EBBEB150, 0x78) = 6659 0
kevent(0x3, 0x153C67788, 0x1) = 1 0
lstat64("/Library/Java/JavaVirtualMachines/jdk1.7.0_06.jdk", 0x153C65860, 0x1) = -1 Err#2
stat64("/Library/Java/JavaVirtualMachines/jdk1.7.0_06.jdk", 0x153C668B8, 0x0) = -1 Err#2
getattrlist("/", 0x153C665A0, 0x153C66190) = 0 0
getattrlist("/Library/Internet Plug-Ins/JavaAppletPlugin.plugin", 0x153C665A0, 0x153C66190) = 0 0
...
So, when troubleshooting OS level problems, system call tracing is always your friend, irrespective of the operating system. This is a good case in point.
Asynchronous event-driven servers with Apache MINA
2012-08-06
A while ago we had to do performance testing for a web application that depends on an external network service that couldn’t be tested in place with high data volumes. We wanted to include the network protocol communication with the external service in the test (i.e. work on the “system integration testing” level), and since there was no existing mock server, I decided to spend a few hours evaluating whether we could implement one ourselves. Since the mock server can obviously become a bottleneck, I had to make sure it was implemented efficiently enough (I/O, threading, session and memory usage etc.).
Implementing a server that leverages asynchronous I/O with Java NIO can be a tedious task, mainly because incoming and outgoing protocol messages get fragmented, and you need to handle things like defragmentation and state management. The network protocol handling code can be difficult to get right, and if you don’t design your abstractions carefully, it will get intertwined with application-level logic, resulting in unmaintainable code.
There are several prominent asynchronous event-driven network communication frameworks for Java that you can use for implementing protocol servers and clients. Among the better known are Netty, Apache MINA and GlassFish Grizzly. These frameworks allow implementing scalable, high-performance and extensible network applications. The application developer is freed of much of the protocol message handling, state, session and thread management details. All of the frameworks listed above are widely used and mature, but I had to pick one and decided to give Apache MINA 2.0 a try.
Apache MINA defines the concept of a service, which in abstract terms represents a network accessible endpoint that a consumer can communicate with to request it to perform some well-defined task. An IoService class instance acts as an entry point to a service, implemented as a connector on the client side and as an acceptor on the server side. Acceptors are used when implementing servers: they act as communication endpoints to a service, accepting new sessions and mediating network traffic between consumers and the server-side components responsible for actual message processing. The application developer picks an appropriate acceptor type (e.g. NioSocketAcceptor for non-blocking TCP/IP) based on their requirements. Acceptors are responsible for network communication, connection and thread management etc., but they delegate responsibilities to other interfaces that you’re free to customize and configure. As a minimum, you’ll need to configure an IoHandler interface implementation that takes care of handling different I/O events – for servers most notably receiving messages, but you can also choose to handle session and exception related events. An acceptor can also have multiple filters that do I/O event pre- and post-processing. You’ll typically need to configure at least a protocol message encoder and decoder (ProtocolCodecFilter) that will take care of message serialization and deserialization.
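As a concrete minimal sketch of these concepts (assuming MINA 2.0 on the classpath; the port number and the echo logic are illustrative, not our actual mock server), a line-based TCP server can be set up like this:

import java.net.InetSocketAddress;
import java.nio.charset.Charset;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

// Minimal MINA 2.0 server sketch: a non-blocking TCP acceptor with a text
// line codec filter and an IoHandler that echoes back each received line.
public class EchoProtocolServer {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();

        // the codec filter takes care of message (de)serialization and defragmentation
        acceptor.getFilterChain().addLast("codec",
                new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));

        // the handler receives complete, decoded messages as events
        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void messageReceived(IoSession session, Object message) {
                session.write("echo: " + message); // protocol-specific logic goes here
            }
        });

        acceptor.bind(new InetSocketAddress(10023)); // port is arbitrary
    }
}

Note how the handler only ever sees complete messages: all the fragment buffering happens in the codec filter, which keeps the application logic cleanly separated from transport concerns.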
I found that Apache MINA really did fulfill its promise: implementing a high-performance, scalable and extensible network server was easy with it. MINA also helps cleanly separate network communication from application-level message processing logic, and supporting multiple different protocols in the same server works well. As a downside, the documentation for v2.0 is a bit lacking, but fortunately there are quite a few code samples that you can check out.
Using Oracle SQLDeveloper with MySQL
2012-08-03
Oracle SQLDeveloper is a tool I’ve found very valuable in projects where I’m using the Oracle Database. Normally I like using command-line tools, but many tasks, such as browsing large result sets or data in wide tables, or browsing database schema metadata, are much faster with SQLDeveloper. SQLDeveloper supports other relational databases too, and since I’m currently working on a project involving MySQL, I thought I’d give SQLDeveloper (v3.1.07) a little test with MySQL (v5.5).
You can install extensions in SQLDeveloper in a similar fashion as in Eclipse, and there’s a MySQL JDBC driver available (a third-party SQLDeveloper extension). The extension failed to install properly on my Mac: everything looked to be going fine, but the installation failed silently. However, you can also configure JDBC drivers manually in SQLDeveloper, so I downloaded the MySQL driver and configured it (preferences / database / third party JDBC drivers). After that, a new tab called “MySQL” appears when creating a new database connection, where you can specify DB product specific connection parameters.
I was able to successfully connect to my MySQL database, but when trying to browse data in a table containing 5+ million rows, the operation failed with the following error:
Task Error
Java heap space
I don’t remember running into this problem with SQLDeveloper when connecting to Oracle DB. As a workaround, I increased the Java VM heap size argument that SQLDeveloper passes to the Java VM at launch (in the sqldeveloper.conf configuration file).
I also wanted to test whether SQLDeveloper would run with my newly installed Java 7, but that turned out to be a bit more difficult. On Mac OS X, changing the Java path in the SQLDeveloper default configuration files had no effect, as this parameter was overridden in a platform-specific configuration file (sqldeveloper-Darwin.conf) that had to be changed in order to use an alternate Java VM. The correct configuration file to change was revealed after starting up SQLDeveloper with the --verbose flag from the command line:
sqldeveloper.sh --verbose
SQLDeveloper can help in a number of ways when you’re working with Oracle DB: it provides wizards for creating and editing table definitions, imports and exports data, and allows viewing and changing many aspects of database metadata. The SQL Worksheet helps you when writing SQL statements with its autocompletion feature. SQLDeveloper is a great tool to use with Oracle DB, but you should note that some of its features aren’t available for other database products.
Deploying a dependency manager
2012-08-02
It’s common for older codebases to include the dependencies required by the application, such as code libraries, image and sound files and other artifacts, in the source code repository itself. This typically leads to the repository getting bloated. As an example, in a project where I was working recently, the repository size on checkout was 450 MB. Checking out such a repository takes a lot of time, wastes disk space in each developer sandbox and makes implementing continuous integration more difficult.
Here’s where dependency management technologies such as Apache Maven and Ivy (with an artifact repository) can be a great help. Deploying such a solution in a greenfield project is simple, but when you have an existing codebase that’s used in production, things get more complicated.
Suppose you have a set of libraries that the codebase currently uses and you want to manage them using a dependency manager and artifact repository. You can choose to
- a) declare and publish your own artifacts based on artifacts in the existing codebase
- b) use artifacts published in public repositories
- c) use a hybrid approach
With option a you gain maximum control over your artifacts (and thus over what exactly gets included), while with b you hope to be doing less work managing artifacts in the long term. Using publicly available artifacts requires you to determine and categorize library dependencies, which can require a lot of detective work. Also, you need to trust the persons providing the artifacts to be doing a good job declaring transitive dependencies. I haven’t found good automated solutions for determining dependencies, so I’ve used the following approach:
- Determine compile-time dependencies
  - Check import statements in the source code. For each import, find the corresponding library in a public artifact repository and add the dependency to the compile-time set.
  - Delete jars one by one and compile. If the build fails, add the required dependencies.
  - Rebuild and iterate.
- Determine runtime dependencies
  - Build the package, deploy and test. If any dependencies are missing, add them to the runtime set.
  - Iterate.
- Cross-check against the original set of jar files (optional)
  - Remove any files that aren’t present in the original runtime set using explicit exclusions.
This is a rather frustrating and dull method. Also, in practice, in the cross-check step some artifacts usually bring in transitive dependencies that weren’t packaged with the application originally and which you might not want to include in the dependency set. You can solve this in two ways: a) include transitive dependencies by default and exclude the unnecessary ones, or b) exclude transitive dependencies by default and explicitly include the required ones. With option b you gain control, but you’re not using the dependency manager to its full potential. Option a, on the other hand, may require that you let go of your purism and accept that the dependency set will include something that wasn’t included as a dependency originally.
Oracle Enterprise Linux now free (of charge)
2012-08-01
Linux application software developers often face a choice between two compatible but different OS variants: Red Hat Enterprise Linux (RHEL) and CentOS. Using RHEL can sometimes be problematic for developers, because typically some sort of centralized subscription management is required for enabling software updates, and depending on the organization you can get stuck waiting for hours or even days. The required bureaucracy can be a really frustrating experience for software developers looking to install just a basic virtualized RHEL guest OS instance for development or QA purposes: 5 minutes and you’re done – if it just weren’t for the subscription management part! CentOS, on the other hand, can be freely downloaded and used, but the downside is that publishing updates has traditionally dragged behind. Depending on the project, this may not be a big problem for QA and development purposes, but for internet-facing production platforms you’d like the security updates to be installed as soon as they’re released.
Oracle Enterprise Linux is an enterprise Linux distribution similar to CentOS in that it’s binary compatible with RHEL. It has also recently been made freely (as in beer) available. The big upside for the app dev use case above is that Oracle promises to publish updates faster than CentOS has done. For operations personnel the benefit is that you can also get paid OS support from Oracle, as well as some interesting features, such as zero-downtime kernel updates with Ksplice.
Being a bit curious, I downloaded the Oracle Linux installation image (Oracle Linux Release 6 Update 3 for x86_64 [64 Bit], 3.5 GB) from Oracle and installed it as a virtualized guest OS instance on my laptop. The installation process worked just as with RHEL and CentOS, except for the different branding, logos etc., of course. Software updates also installed without problems after the initial installation.
So far I’ve dismissed Oracle Linux as a niche distribution and had some doubts about its continuity, but it does look like a solid OS, and it has been around for a while now, so it could be a viable option to consider when choosing an enterprise Linux platform.
For more information see:
- Oracle Linux: A better alternative to CentOS
- Oracle Linux Technical information (incl. Oracle value-added features)
- LWN.net article on Oracle Linux
Java 7 on Mac OS X – finally!
2012-08-01
Apple doesn’t exactly have a history of timely Java releases for Mac OS X, so I didn’t expect Java 7 to be available soon after its GA release, but I was very disappointed to read Apple’s announcement in October 2010 stating that they would not be supporting Java 7 on Mac OS X. I was also quite sceptical in November, when there was a surprise announcement from Apple and Oracle saying the two companies would be working together to port OpenJDK to Mac OS X. Java 7 was published in July 2011, but patience was still required from Mac OS X Java developers.
When the OpenJDK Java 7 preview packages were finally made available in 2012, they didn’t run on Mac OS X Snow Leopard, so I had to build the JDK from source. That was fairly simple but rather time-consuming, and the build process practically rendered my laptop unusable since it used up a lot of CPU, I/O and memory resources. Operating system reinstallation is always a huge amount of work, with all the backing up, finding a suitable time slot and other arrangements, so it was only last week that I finally managed to find the time for the OS X Lion upgrade. Now I’m able to use the Oracle-provided JDK 7 installation packages, which makes JDK upgrades a lot easier. So, a year after the Java 7 release I’m finally able to run it on my laptop! One nice thing about Oracle picking up Java on Mac OS X is that they’ve promised to release Java 7 updates simultaneously for Mac, Windows, Linux and Solaris (Henrik on Java). I hope the Mac OS X port code base is well integrated with the rest of the tree and that future Java major releases like 8 and 9 will also happen in a timely fashion.