I was able to type Unicode input, such as Chinese, in <p:inputText> and <p:editor>, and retrieve it intact in the managed bean method. However, after I upgraded to PrimeFaces v3.1.1, all those characters became mojibake or question marks. Only Latin input comes through correctly; Chinese, Arabic, Hebrew, Cyrillic, and similar characters are mangled.
Why is this happening, and how can I solve it?
Normally, JSF/Facelets sets the request parameter character encoding to UTF-8 by default when the view is created or restored. But if any request parameter is accessed before the view is created or restored, then it is too late to set the proper character encoding: request parameters are parsed only once.
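The "parsed only once" behavior can be illustrated with a toy model (a hypothetical class for illustration, not actual container code): the first access to a parameter decodes the raw bytes with whatever encoding is in effect at that moment, and any later setCharacterEncoding() call is silently ignored.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Toy model of a servlet container's lazy, parse-once parameter handling.
class ToyRequest {
    private final byte[] rawQuery;                           // bytes as received on the wire
    private Charset encoding = StandardCharsets.ISO_8859_1;  // server default
    private String cachedValue;                              // parsed at most once

    ToyRequest(byte[] rawQuery) { this.rawQuery = rawQuery; }

    void setCharacterEncoding(String name) {
        encoding = Charset.forName(name); // has no effect once parsing happened
    }

    String getParameter() {
        if (cachedValue == null) {
            cachedValue = new String(rawQuery, encoding); // first caller fixes the encoding
        }
        return cachedValue;
    }
}

public class ParseOnceDemo {
    public static void main(String[] args) {
        byte[] wire = "中文".getBytes(StandardCharsets.UTF_8);

        ToyRequest tooLate = new ToyRequest(wire);
        tooLate.getParameter();                 // something reads a parameter too early
        tooLate.setCharacterEncoding("UTF-8");  // too late now
        System.out.println(tooLate.getParameter().equals("中文")); // false: mojibake

        ToyRequest inTime = new ToyRequest(wire);
        inTime.setCharacterEncoding("UTF-8");   // encoding set before the first access
        System.out.println(inTime.getParameter().equals("中文"));  // true
    }
}
```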
That it fails in PrimeFaces 3.x after upgrading from 2.x is caused by the new isAjaxRequest() override in PrimeFaces' PrimePartialViewContext, which checks a request parameter:
@Override
public boolean isAjaxRequest() {
    return getWrapped().isAjaxRequest()
        || FacesContext.getCurrentInstance().getExternalContext()
               .getRequestParameterMap().containsKey("javax.faces.partial.ajax");
}
By default, isAjaxRequest() (the implementation in Mojarra/MyFaces, which the PrimeFaces code above obtains via getWrapped()) checks a request header, as shown below. This does not affect the request parameter encoding, because request parameters are not parsed when a request header is obtained:

if (ajaxRequest == null) {
    ajaxRequest = "partial/ajax".equals(ctx.
        getExternalContext().getRequestHeaderMap().get("Faces-Request"));
}
However, the isAjaxRequest() method may be called by any phase listener, system event listener, or some application factory before the view is created or restored. Therefore, when using PrimeFaces 3.x, the request parameters are parsed before the proper character encoding is set, so they end up decoded with the server's default encoding, which is usually ISO-8859-1. That is what mangles the input.
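What such a premature parse does to the bytes can be reproduced in plain Java. The sketch below (an illustration only, no servlet API involved) decodes UTF-8 bytes as ISO-8859-1 to show the mojibake, and encodes to ISO-8859-1 to show where the question marks come from:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String submitted = "中文"; // what the user typed

        // The browser sends the parameter as UTF-8 bytes (3 bytes per character here).
        byte[] wireBytes = submitted.getBytes(StandardCharsets.UTF_8);

        // The server parses the parameters with its default ISO-8859-1 encoding:
        // each byte becomes one (wrong) character, producing mojibake.
        String mojibake = new String(wireBytes, StandardCharsets.ISO_8859_1);
        System.out.println(mojibake.length()); // 6, not 2

        // ISO-8859-1 maps bytes 1:1, so this particular damage is still reversible ...
        String recovered = new String(mojibake.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(recovered.equals(submitted)); // true

        // ... but encoding the original characters TO ISO-8859-1 is lossy:
        // unmappable characters are replaced by literal question marks.
        byte[] lossy = submitted.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(new String(lossy, StandardCharsets.ISO_8859_1)); // "??"
    }
}
```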
There are several ways to fix this:
Use a servlet filter which sets ServletRequest#setCharacterEncoding() to UTF-8. Setting the response encoding via ServletResponse#setCharacterEncoding() is, by the way, unnecessary, as the response is not affected by this issue.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;

@WebFilter("/*")
public class CharacterEncodingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws ServletException, IOException {
        request.setCharacterEncoding("UTF-8");
        chain.doFilter(request, response);
    }

    // init() and destroy() implementations omitted; no-ops suffice here.
}
You only need to take into account that HttpServletRequest#setCharacterEncoding()
only sets the encoding for POST request parameters, not for GET request parameters. For GET request parameters you'd still need to configure it at server level.
If you happen to use the JSF utility library OmniFaces, such a filter is already provided out of the box: the CharacterEncodingFilter. Just install it as below in web.xml, as the first filter entry:
<filter>
<filter-name>characterEncodingFilter</filter-name>
<filter-class>org.omnifaces.filter.CharacterEncodingFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>characterEncodingFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Reconfigure the server to use UTF-8 instead of ISO-8859-1 as the default encoding. In the case of GlassFish, that would be a matter of adding the following entry to the <glassfish-web-app> of the /WEB-INF/glassfish-web.xml file:
<parameter-encoding default-charset="UTF-8" />
Tomcat doesn't support such a setting. It has the URIEncoding attribute on the <Connector> entry in server.xml, but this applies to GET requests only, not to POST requests.
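For the GET side on Tomcat, the fix is a connector-level setting in server.xml. A minimal sketch follows (the port and protocol values are placeholders for your own connector; note that Tomcat 8 and newer already default the URI encoding to UTF-8):

```xml
<!-- server.xml: URIEncoding affects GET request parameters only -->
<Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />
```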
Report it as a bug to PrimeFaces. Is there really any legitimate reason to check whether the HTTP request is an ajax request by checking a request parameter instead of a request header, as you would do for standard JSF and, for example, jQuery? The PrimeFaces core.js JavaScript is doing exactly that. It would be better if it had set this as a request header of the XMLHttpRequest.
While investigating this problem, you may stumble upon the following "solutions" somewhere on the Internet. None of them will ever work in this specific case. Here is why.
Setting XML prolog:
<?xml version='1.0' encoding='UTF-8' ?>
This only tells the XML parser to use UTF-8 to decode the XML source before building the XML tree around it. The XML parser actually used by Facelets is SAX, during JSF view build time. This part has nothing to do with HTTP request/response encoding.
Setting HTML meta tag:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
The HTML meta tag is ignored when the page is served over HTTP via an http(s):// URI. It is only used when the page is saved by the client as an HTML file on the local disk and then reopened via a file:// URI in the browser.
Setting HTML form accept charset attribute:
<h:form accept-charset="UTF-8">
Modern browsers ignore this. It has an effect only in Microsoft Internet Explorer, which even then applies it wrongly. Never use it. All real web browsers will instead use the charset attribute specified in the Content-Type header of the response. Even MSIE will do it the right way as long as you do not specify the accept-charset attribute.
Setting JVM argument:
-Dfile.encoding=UTF-8
This is only used by the Oracle(!) JVM to read and parse the Java source files.
Comments:

…p:commandButton. Your investigation proves it. This is a serious bug, because the wrongly encoded characters are duplicated every time the save button is clicked. - Matt Handy

…filter, but that did not work either. Currently there are two filters in my application: PrimeFaces' file upload filter and my own. I added HttpServletRequest#setCharacterEncoding() in my own filter. Could it cause problems if PrimeFaces' filter is invoked before the other one? - Mr.J4mes

…System.out.println() to print the submitted data. Is stdout (where System.out writes to) also configured to use UTF-8? In Eclipse, you can set that via Window > Preferences > General > Workspace > Text file encoding. Note that using both the filter and the server configuration is unnecessary; one of the two suffices. - BalusC

…the useUnicode=yes&characterEncoding=UTF-8 part. I created my JDBC connection pool and JDBC resource using the GlassFish admin console. Could you tell me how to apply those two properties to my JDBC connection? - Mr.J4mes