"Argument list too long": Beyond Arguments and Limitations

Four approaches to getting around argument length limitations on the command line.

At some point during your career as a Linux user, you may have come across the following error:

[user@localhost directory]$ mv * ../directory2
bash: /bin/mv: Argument list too long

The "Argument list too long" error, which occurs anytime a user feeds too many arguments to a single command, leaves the user to fend for oneself, since all regular system commands (ls *, cp *, rm *, etc...) are subject to the same limitation. This article will focus on identifying four different workaround solutions to this problem, each method using varying degrees of complexity to solve different potential problems. The solutions are presented below in order of simplicity, following the logical principle of Occam's Razor: If you have two equally likely solutions to a problem, pick the simplest.

Method #1: Manually split the command line arguments into smaller bunches.

Example 1

[user@localhost directory]$ mv [a-l]* ../directory2
[user@localhost directory]$ mv [m-z]* ../directory2

This method is the most basic of the four: it simply involves resubmitting the original command with fewer arguments, in the hope that this will solve the problem. Although this method may work as a quick fix, it is far from being the ideal solution. It works best if you have a list of files whose names are evenly distributed across the alphabet. This allows you to establish consistent divisions, making the chore slightly easier to complete. However, this method is a poor choice for handling very large quantities of files, since it involves resubmitting many commands and a good deal of guesswork.
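To take some of the guesswork out of choosing split points, you can first tally the filenames by their leading character. This one-liner is a minimal sketch using standard tools (ls, cut, sort and uniq) and assumes ordinary single-byte filenames:

[user@localhost directory]$ ls | cut -c1 | sort | uniq -c

The resulting counts make it easier to pick ranges such as [a-l]* and [m-z]* that divide the files into roughly equal batches.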

Method #2: Use the find command.

Example 2

[user@localhost directory]$ find $directory -type f -name '*' -exec mv {} $directory2/. \;

Method #2 involves filtering the list of files through the find command, instructing it to properly handle each file based on a specified set of command-line parameters. Due to the built-in flexibility of the find command, this workaround is easy to use, successful and quite popular. It allows you to selectively work with subsets of files based on their name patterns, date stamps, permissions and even inode numbers. In addition, and perhaps most importantly, you can complete the entire task with a single command.
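For instance, the same command is easily narrowed to a subset of the files. The variation below is a hypothetical sketch (the name pattern and age are placeholders, not part of Example 2) that moves only the .log files not modified in the last 30 days:

[user@localhost directory]$ find $directory -type f -name '*.log' -mtime +30 -exec mv {} $directory2/. \;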

The main drawback to this method is the length of time required to complete the process. Unlike Method #1, where groups of files get processed as a unit, this procedure actually inspects the individual properties of each file before performing the designated operation. The overhead involved can be quite significant, and moving lots of files individually may take a long time.
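If your tools support it, you can recover much of that lost speed by letting find batch the arguments itself. The sketch below relies on find's + terminator (which packs as many file names into each invocation as the system limit allows) together with the -t option of GNU mv; check that both are available on your system before relying on it:

[user@localhost directory]$ find $directory -type f -exec mv -t $directory2 {} +

With this form, mv is called once per large batch of files rather than once per file, so the per-file overhead largely disappears.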

Method #3: Create a function. *

Example 3a

function large_mv ()
{       # read one filename per line from stdin and move it
        while read -r line1; do
                mv directory/"$line1" ../directory2
        done
}
ls -1 directory/ | large_mv

Although writing a shell function does involve a certain level of complexity, I find that this method allows for a greater degree of flexibility and control than either Method #1 or #2. The short function given in Example 3a simply mimics the functionality of the find command given in Example 2: it deals with each file individually, processing them one by one. By writing a function, however, you also gain the ability to perform any number of actions per file while still using a single command:

Example 3b

function larger_mv ()
{       # checksum and list each file before moving it
        while read -r line1; do
                md5sum directory/"$line1" >> ~/md5sums
                ls -l directory/"$line1" >> ~/backup_list
                mv directory/"$line1" ../directory2
        done
}
ls -1 directory/ | larger_mv

Example 3b demonstrates how easily you can get an md5sum and a backup listing of each file before moving it.

Unfortunately, since this method also requires that each file be dealt with individually, it will involve a delay similar to that of Method #2. From experience I have found that Method #2 is a little faster than the function given in Example 3a, so Method #3 should be used only in cases where the extra functionality is required.
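One further caution: even with quoting, parsing the output of ls as in Examples 3a and 3b breaks on filenames containing leading whitespace or embedded newlines. A more defensive sketch (the safer_mv function here is hypothetical, and it assumes GNU find's -print0 option and bash's read -d '') feeds the function a null-delimited file list instead:

function safer_mv ()
{       # read null-terminated paths from stdin and move each one
        while IFS= read -r -d '' file1; do
                mv "$file1" ../directory2
        done
}
find directory/ -maxdepth 1 -type f -print0 | safer_mv

Because find emits complete, null-terminated paths, this version copes with any legal filename.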

Method #4: Recompile the Linux kernel. **

This last method requires a word of caution, as it is by far the most aggressive solution to the problem. It is presented here for the sake of thoroughness, since it is a valid method of getting around the problem. However, please be advised that due to the advanced nature of the solution, only experienced Linux users should attempt this hack. In addition, make sure to thoroughly test the final result in your environment before implementing it permanently.

One of the advantages of using an open-source kernel is that you are able to examine exactly what it is configured to do and modify its parameters to suit the individual needs of your system. Method #4 involves manually increasing the number of pages that are allocated within the kernel for command-line arguments. If you look at the include/linux/binfmts.h file, you will find the following near the top:

/*
 * MAX_ARG_PAGES defines the number of pages allocated for arguments
 * and envelope for the new program. 32 should suffice, this gives
 * a maximum env+arg of 128kB w/4KB pages!
 */
#define MAX_ARG_PAGES 32

In order to increase the amount of memory available for command-line arguments, you simply need to replace the MAX_ARG_PAGES value with a higher number. Once this edit is saved, recompile, install and reboot into the new kernel as you normally would.
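As a rough sketch of the whole procedure on a 2.4-era kernel tree (the paths, the 64-page value and the build targets are illustrative only; they vary by distribution and kernel version):

cd /usr/src/linux
# raise the limit from the default 32 pages to 64 (requires GNU sed's -i)
sed -i 's/#define MAX_ARG_PAGES 32/#define MAX_ARG_PAGES 64/' include/linux/binfmts.h
make dep && make bzImage && make modules && make modules_install
# then install the new image, update your boot loader and reboot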

On my own test system I managed to solve all my problems by raising this value to 64. After extensive testing, I have not experienced a single problem since the switch. This is entirely expected since even with MAX_ARG_PAGES set to 64, the longest possible command line I could produce would only occupy 256KB of system memory--not very much by today's system hardware standards.

The advantages of Method #4 are clear: you are now able simply to run the command as you normally would, and it completes successfully. The disadvantages are equally clear: if you raise the amount of memory available to the command line beyond the amount of available system memory, you can create a denial-of-service attack on your own system and cause it to crash. On multiuser systems in particular, even a small increase can have a significant impact, because every user is allocated the additional memory. Therefore, always test extensively in your own environment, as this is the safest way to determine whether Method #4 is a viable option for you.

Conclusion

While writing this article, I came across many explanations for the "Argument list too long" error. Since the error message starts with "bash:", many people placed the blame on the bash shell. Similarly, seeing the application name included in the error caused a few people to blame the application itself. Instead, as I hope to have conclusively demonstrated in Method #4, the kernel itself is to "blame" for the limitation. In spite of the enthusiastic endorsement given by the original binfmts.h author, many of us have since found that 128KB of dedicated memory for the command line is simply not enough. Hopefully, by using one of the methods above, we can all forget about this one and get back to work.

Notes:

* All functions were written using the bash shell.

** The material presented in Method #4 was gathered from a discussion on the linux-kernel mailing list in March 2000. See the "Argument List too Long" thread in the linux-kernel archives for the full discussion.



