Required ports for JMS using HornetQ (JBoss) to expose on docker container


I just ran into this problem myself and found the solution.

In your case the problem is in the JBoss configuration; in my case it was in Wildfly 8.2.

You are probably starting your JBoss with the following parameter: jboss.bind.address=0.0.0.0

I use this parameter in my Wildfly so that it accepts external connections from any IP, because my Wildfly is exposed on the Internet.
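For reference, this bind address can be set either in standalone.xml or on the command line when starting the server. A typical invocation (the container/image paths here are just an example) looks like:

```shell
# bind all interfaces to the wildcard address
./bin/standalone.sh -b 0.0.0.0

# equivalent, using the system property directly
./bin/standalone.sh -Djboss.bind.address=0.0.0.0
```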

The problem is that if you do not tell JBoss/Wildfly which IP HornetQ should report to JMS clients doing a remote lookup, HornetQ will assume the IP set in jboss.bind.address. In this case it detects that 0.0.0.0 is not a valid IP. You probably see the following message in your JBoss log:

INFO [org.hornetq.jms.server] (ServerService Thread Pool -- 53) HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to "hostname.your.server". If this new address is incorrect please manually configure the connector to use the proper one.

In this case HornetQ falls back to the machine's hostname. On Linux, for example, it uses what is defined in /etc/hostname.

That causes another problem: usually the machine's hostname is not a name that can be resolved to an IP on the Internet via DNS.

So here is what is probably happening to you: your JBoss server is configured to bind to 0.0.0.0; HornetQ (embedded in JBoss) tries to use that IP and, since it is not valid, falls back to your server's hostname. When your remote JMS client (outside your local network) performs a lookup on your JBoss, HornetQ tells the client to reach the HornetQ resources at the host YOUR_HOSTNAME_LOCAL_SERVER, but the client cannot resolve that name through DNS, so the following failure occurs:

java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:123)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
    at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:176)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:169)
    at io.netty.channel.DefaultChannelPipeline$HeadHandler.connect(DefaultChannelPipeline.java:1008)
    at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
    at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
    at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
    at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:168)
    at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
    at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
    at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
    at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
    at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
    at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:465)
    at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:847)
    at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:199)
    at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:165)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
    at java.lang.Thread.run(Thread.java:745)
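You can reproduce this failure mode in isolation. The sketch below (the hostname and port are made-up placeholders; ".invalid" is a reserved TLD that never resolves) shows that connecting a NIO channel to an unresolvable hostname throws exactly this UnresolvedAddressException:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class ResolveDemo {
    public static void main(String[] args) throws Exception {
        // DNS lookup fails, so the address stays unresolved (no exception yet)
        InetSocketAddress addr = new InetSocketAddress("no-such-host.invalid", 5445);
        System.out.println(addr.isUnresolved()); // true

        try (SocketChannel ch = SocketChannel.open()) {
            // connecting to an unresolved address is what blows up,
            // just like Netty inside the HornetQ client
            ch.connect(addr);
        } catch (UnresolvedAddressException e) {
            System.out.println("UnresolvedAddressException");
        }
    }
}
```

This is the same situation the JMS client ends up in when HornetQ hands it a hostname that public DNS cannot resolve.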

The solution is to configure JBoss with the host it should report to clients performing a remote lookup.

In my case, for Wildfly, the configuration is as follows. The standalone.xml file must be changed:

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      <security-enabled>true</security-enabled>
      <journal-file-size>102400</journal-file-size>
      <connectors>
         <http-connector name="http-connector" socket-binding="http-remote-jms">
            <param key="http-upgrade-endpoint" value="http-acceptor"/>
         </http-connector>
      </connectors>
      ...
   </hornetq-server>
</subsystem>

AND

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
   ...
   <outbound-socket-binding name="http-remote-jms">
      <remote-destination host="YOUR_REAL_HOSTNAME" port="${jboss.http.port:8080}"/>
   </outbound-socket-binding>
</socket-binding-group>

Note that I am not using HTTPS, because I could not get Wildfly to work with HTTPS for JMS.
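As for the Docker part of the question: with this configuration JMS is tunneled over the HTTP port via HTTP upgrade, so the only port the container should need to publish for remote JMS clients is 8080 (plus 9990 if you also want the management console). A hypothetical run (image name is a placeholder) would be:

```shell
docker run -p 8080:8080 -p 9990:9990 my-wildfly-image
```

If you were instead using a plain Netty connector rather than the http-connector, you would publish HornetQ's own port (5445 by default) instead.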