[02:02:24] <kt> Atomic increment-- $inc -- can anyone point me to some documentation or examples where this kind of operation supports a floor (i.e., zero)?
[02:03:14] <cheeser> can't you just filter out documents that are already 0?
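(A sketch of cheeser's suggestion: put the floor in the query so the decrement only matches documents still above zero; "counter" and someId are stand-ins:)

    db.items.update(
        { _id: someId, counter: { $gt: 0 } },   // only match while counter > 0
        { $inc: { counter: -1 } }               // atomic decrement, floored at 0
    )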
[02:44:50] <Zeeraw> Hey, guys I have an issue with DataFileSync mmap drop taking forever to complete
[02:45:18] <Zeeraw> When it runs it locks reads and writes and ruins performance
[02:45:50] <Zeeraw> Anything I could do to my environment and configuration to reduce the amount of time it takes to do these dumps?
[06:00:24] <johnnode> how do we use $inc for multiple fields in the same db query? e.g. db.users.update( {_id:'ab'}, { $inc:{field1:1} , $inc:{field2:1} } ) -> I tested it but it only applied to "field2", not "field1". Thanks for help.
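(A minimal sketch of the fix, for reference: in a JavaScript object literal duplicate keys collapse, so the second $inc silently replaces the first. Put both fields under a single $inc instead:)

    db.users.update(
        { _id: 'ab' },
        { $inc: { field1: 1, field2: 1 } }   // one $inc, two fields
    )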
[06:13:43] <Soothsayer> I am using the aggregation framework. I've defined pipeline stages such that I'm now getting an array of Products as results with an [{ _id, title, price }, ..]. How to define the next pipeline stage such that I get the count of products, the minimum price, the maximum price and also the list of product ids?
[06:50:11] <Soothsayer> redsand_: sorry, missed your message
[06:51:32] <Soothsayer> redsand_: how do I define a new column name which holds the values of all product ids?
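(A minimal sketch of a $group stage covering all four of Soothsayer's outputs; the output field names are illustrative:)

    db.products.aggregate([
        // ...earlier pipeline stages producing { _id, title, price }...
        { $group: {
            _id: null,                        // collapse all products into one result
            count:      { $sum: 1 },          // number of products
            minPrice:   { $min: "$price" },   // cheapest
            maxPrice:   { $max: "$price" },   // most expensive
            productIds: { $push: "$_id" }     // list of all product ids
        }}
    ])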
[07:54:45] <lzakrzewski> does mongo have features for role checks?
[07:56:08] <Kim^J> lzakrzewski: ? Are you asking about permissions on the database?
[08:00:05] <Kim^J> Hm, is there a limit on how many items you can have in an array that you send to the $in operator? I see something about 400k when you use two $in operators, but I'm only using one.
[08:23:53] <gerryvdm_mbp> is there an expected behavior for doing .sort({field:0}) ?
[08:24:45] <kali> it's not specified, as far as i know
[08:36:41] <bodik> what's the content of --dbpath on a --configsvr?
[08:37:45] <bodik> well what i'm really up to is, i have a private cloud with mongodb deployed, i've created a script for starting up a replicated sharded cluster
[09:23:56] <Rhaven> Hello everyone, i've got some troubles about migration between shards. After a few search queries on google, it seems like there are some corrupted files that caused this error. http://pastebin.com/ZPeGeMm9
[09:24:02] <joannac> mark____: is that a user on the admin database, or your own database?
[09:29:02] <joannac> mark____: yes, get an account with the right privileges to either 1. create a new user with write privileges, or 2. give an existing user write privs
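(A sketch of option 1, assuming a 2.4-era deployment where db.addUser() is the user-management helper; "mydb", "writer" and the password are hypothetical:)

    use mydb
    db.addUser({ user: "writer", pwd: "secret", roles: [ "readWrite" ] })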
[09:30:58] <Rhaven> joannac: This migration is the result of the mongo balancer process
[09:32:58] <Rhaven> joannac: But i think that there is something wrong in local.oplog.rs and i don't know how to solve it
[09:37:08] <roflmaus> Does Mongo by default allow anyone to store and retrieve data, even over a network?
[09:37:48] <mark____> @joannac: can you please help me by typing the commands, because i don't understand the links you provided to me
[09:38:24] <roflmaus> Rhaven, and it is very insecure, right? Requires some configuration to deal with it?
[09:41:38] <Rhaven> roflmaus: sure it is. In my case i use some firewall rules to allow only trusted sources
[09:43:03] <Rhaven> roflmaus: But you can also enable user authentication on mongo
[09:53:55] <joannac> Rhaven: do you have a replset? why do you think it's in the oplog?
[09:56:50] <quattr8> joannac: no data is coming in at mms.mongodb.com, it just says "no data" and there's nothing in the logs or any errors
[09:57:05] <quattr8> it just connects to the mongos right? or do i have to install the monitoring agent?
[09:59:10] <joannac> quattr8: you need to install the monitoring agent.
[10:00:32] <Rhaven> joannac: Yes, i have 5 replica sets in a sharded environment. And i think this is about the oplog because it says "problem detected during query over local.oplog.rs : { $err: "BSONElement: bad type 49", code: 10320 }"
[10:28:24] <bmcgee> Hey guys, I need a little help figuring out a query. I have a doc that has a timestamp and a value x. I want to grab the last N docs where the total for x in the document set is no more than some parameter f. I want the full document. Is aggregation the way to go?
[10:36:04] <Tiller> Guys, is there a way to do something like: .find({sign: {$in: [1, 2, 3]}, chainId: <?>}); to retrieve 3 rows having sign equal to 1, 2 and 3 AND the same chainId?
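(A plain find() can't enforce "same chainId" across documents, since each document is matched independently. One hedged approach, assuming a stand-in collection "items": group by chainId and keep only groups that contain all three signs:)

    db.items.aggregate([
        { $match: { sign: { $in: [1, 2, 3] } } },
        { $group: { _id: "$chainId",
                    signs: { $addToSet: "$sign" },   // which signs this chain has
                    ids:   { $push: "$_id" } } },    // docs to fetch afterwards
        { $match: { signs: { $all: [1, 2, 3] } } }   // chains with all of 1, 2 and 3
    ])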
[10:40:33] <joannac> bmcgee: that doesn't help. are you adding all values of the field "foo" together over all docs? How do you decide which docs need to be added together?
[10:41:11] <Rhaven> joannac: Just to be sure, we are talking about the db.repairDatabase() command? Will it block all operations across all shards or just on the replica set? Should i turn off the production website?
[10:41:18] <bmcgee> joannac: start from now, go backwards in time, add foo as you go, if the sum of foo > x then stop
[10:41:34] <bmcgee> joannac: that defines the document set
[10:45:58] <joannac> bmcgee: oh. I don't know if that's doable inside the shell.
[10:46:46] <bmcgee> The alternative is I just create a cursor and stripe through on the server side myself
[10:46:57] <bmcgee> was curious if it could be offloaded
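(A sketch of the cursor approach in the shell, using bmcgee's names: x is the per-document value, f the cap on the running total; "docs" and "timestamp" are stand-ins:)

    var f = 100;                   // caller-supplied cap
    var total = 0, result = [];
    var cur = db.docs.find().sort({ timestamp: -1 });   // newest first
    while (cur.hasNext()) {
        var doc = cur.next();
        total += doc.x;            // running sum of x, going backwards in time
        if (total > f) break;      // stop once the sum exceeds the cap
        result.push(doc);
    }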
[10:47:54] <mark____> how do i globally give write permission in mongodb?
[10:54:47] <bartzy> Do field names get saved in an index ?
[10:55:28] <bartzy> i.e. if I have 100 million records in a single-field index. The field is called "status". "status" as ASCII is 6 bytes. Will the index have 6 * 100M bytes now, only for the name? :|
[10:55:44] <bartzy> Or are the savings from short field names ONLY for the data files themselves, meaning not the index part?
[11:38:37] <remonvv> kali: DDoS-ing a public source repository. One wonders what the motivation can be there.
[11:40:04] <gerryvdm_mbp> reminding you of spof in your workflow :)
[11:50:08] <kali> remonvv: yeah. i would not do it.
[11:50:30] <remonvv> kali: Amen. The spof argument is a decent one though ;)
[11:50:57] <kali> remonvv: it did work for us, we had to run hotfix deployments manually this morning
[11:51:42] <kali> i'm not sure antagonizing github users is a very wise move
[11:52:21] <remonvv> To be honest we're not even sure it's a DDoS. DDoS is an awfully easy way to get out of the blame game when a big service goes down.
[11:57:19] <Soothsayer> In the aggregation framework, if I apply a $sort before a $group in the pipeline, the sort doesn't seem to work.. is there a priority in which they are executed?
[11:58:18] <kali> Soothsayer: they are executed in the order you write the pipeline
[11:59:02] <kali> Soothsayer: well, that's not technically exact, but with the same semantics
[12:03:27] <Soothsayer> kali: how does $group deal with a sorting that took place in the previous stage of the pipeline?
[12:03:42] <Soothsayer> "Important The output of $group is not ordered."
[12:06:50] <kali> Soothsayer: i think group will maintain the order of its input within the arrays of subdocuments it outputs, but it will probably break the order of the top-level documents
[12:08:14] <Soothsayer> kali: In my previous pipeline stage, I have a list of Product documents with id, title, finalPrice. In the next stage, I'm sorting the data by 'price'
[12:08:52] <Soothsayer> and then I am doing a $group where I am using an $addToSet to add all the Product id's into a field of the final result.
[12:08:57] <Soothsayer> The order of ids in this field is not the same as the result of the sort.
[12:09:36] <kali> Soothsayer: can you gist two or three documents and your current pipeline so that we can see it and try to tinker with it ?
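(One likely culprit, sketched: $addToSet builds a set, which has no defined order, so it discards the sort; $push appends values in the order documents arrive at the stage:)

    db.products.aggregate([
        { $sort: { finalPrice: 1 } },            // order the stream by price
        { $group: {
            _id: null,
            productIds: { $push: "$_id" }        // $push keeps arrival order; $addToSet does not
        }}
    ])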
[12:26:33] <Soothsayer> kali: can i get a result from the middle of an aggregation pipeline into my application?
[12:26:37] <Soothsayer> before it goes to the next step
[12:30:48] <kali> Soothsayer: nope, you have to discard the steps you don't want and hope you won't break the 16MB limit
[12:31:19] <Soothsayer> kali: so i have to run the aggregation multiple times then?
[13:28:32] <richthegeek> hey, is there anyone here who knows the internal workings of the Node driver? Got an idea for a caching wrapper but I just need to know about how it works
[13:29:13] <richthegeek> so the question is: if I do something like collection.find().sort({column: 1}) ... does that actually do anything on the collection, or does it all occur after I call each() or toArray() or similar?
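(For reference: in the Node driver, find() and sort() only configure a client-side cursor; no query is sent to the server until the cursor is consumed. A sketch:)

    var cursor = collection.find({}).sort({ column: 1 });  // no network I/O yet
    cursor.toArray(function (err, docs) {
        // the query is actually executed here, when results are first requested
    });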
[13:41:11] <kali> themapplz: paste us your entire mapreduce somewhere
[14:08:52] <kali> the second argument of the emit(), each member of the values array in the reducer, and the return value of the reducer have the same format
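(kali's contract in code: because reduce can be re-run on its own output, the emitted value and the reduce return value must share one shape. A minimal sketch with stand-in collection names:)

    var map = function () {
        emit(this.category, { count: 1 });            // value shape: { count: <n> }
    };
    var reduce = function (key, values) {
        var total = 0;
        values.forEach(function (v) { total += v.count; });
        return { count: total };                      // same shape as the emitted values
    };
    db.items.mapReduce(map, reduce, { out: "counts" });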
[15:20:03] <kurtis> Hey guys, is there a good way to adjust the output of map-reduce data? Specifically, I'd like to pull some values out of the _id subdocument and make them top-level objects (for indexing purposes)
[15:20:35] <kurtis> Or am I limited to just running a .forEach on the collection?
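(A sketch of the forEach route, assuming the map-reduce output collection is "mr_out" and the _id subdocument has hypothetical fields year and month:)

    db.mr_out.find().forEach(function (doc) {
        db.mr_out.update(
            { _id: doc._id },
            { $set: { year: doc._id.year, month: doc._id.month } }  // hoist to top level
        );
    });
    db.mr_out.ensureIndex({ year: 1, month: 1 });    // now indexable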
[15:33:50] <bartzy> Asked this in here today but got no answer
[15:34:08] <bartzy> if my field names are ~6GB for a big collection, that means I have less space for actual data in RAM, right ?
[15:35:14] <kali> bartzy: yes. if you have documents with lots of smallish values, it makes sense to keep the field names small
[15:35:51] <bartzy> Not so smallish, much bigger than the 20 bytes I'm saving with short field names
[15:56:24] <kali> cheeser: yeah, there are other avatars of it :)
[15:56:42] <kali> cheeser: from the three digits era
[16:30:42] <blerght_> hi, say I have documents of the following form: { feature: 'age', value: 20, users: [1, 2, 3] }, etc. Now I want to select an array of users aged between 10 and 20. What would be the most efficient way to do this ? (I'm a bit confused between the different ways of aggregating, which one would work best here ?)
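(A sketch, assuming one document per (feature, value) pair as shown: match the age range, unwind the user arrays, and merge them into one deduplicated array; "features" is a stand-in collection name:)

    db.features.aggregate([
        { $match: { feature: "age", value: { $gte: 10, $lte: 20 } } },
        { $unwind: "$users" },                                    // one doc per user id
        { $group: { _id: null, users: { $addToSet: "$users" } } } // merged, deduplicated
    ])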
[17:08:26] <kali> BlackPanx: what do you want to aggregate ? this is a regular find()
[17:09:03] <kali> BlackPanx: haaa... never mind :)
[17:10:10] <kali> BlackPanx: if you can tolerate reorganizing the data at the application level, the find() is the most efficient way. if not, look at the aggregation pipeline
[17:42:58] <jfine> How do I determine a field type in a document?
[17:44:26] <jfine> For instance I have a document with a field unit, and when I do a findOne() it shows as a string (I assume because what I'm seeing is JSON), but I'm pretty sure they're stored as symbols
[18:29:20] <niftylettuce> built w/mongodb ... https://wakeup.io -- free wakeup call service
[18:36:57] <kzim_> hello, is there a way to see if auth is enabled on a shard from the config database?
[18:52:18] <JakePee_> Is there a rule of thumb for when it's better to return a larger result set and parse the data yourself vs when you should pass a more complex query to mongo
[20:09:23] <TkTech> I'm trying to find a way of getting pymongo to use a specific socket, can't seem to find any easy way of replacing the pool mechanism.
[20:09:42] <TkTech> (I have a socket-like object created by punching through a couple of SSH tunnels that I need pymongo to use)
[20:28:41] <quuxman> From time to time, under very moderate query load, pymongo throws "AutoReconnect: [Errno 104] Connection reset by peer"
[20:28:58] <quuxman> if I restart the process everything seems to work normally again
[20:40:57] <quuxman> nevermind, I was reaching the connection limit
[21:27:16] <scyth> does anyone know how to save an ISODate() object from nodejs? In nodejs, date objects are just that - Date() objects - and when they are saved, they're saved as strings, which prevents me from using the expireAfterSeconds index option
[21:28:42] <LouisT> you can't convert it back to a date object?
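(For reference: if the node driver is handed an actual Date instance it stores a BSON date, which TTL indexes understand; a string date will never expire. A sketch with a hypothetical "sessions" collection:)

    // in the shell: documents expire 3600s after createdAt
    db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });

    // in node: insert a real Date instance, not date.toString()
    collection.insert({ createdAt: new Date() }, function (err, result) { /* ... */ });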
[22:09:40] <calstad> So I'm a complete mongo noob and have a question about how to use the aggregation pipeline that I have outlined here https://gist.github.com/calstad/6817868
[22:26:50] <retran> why is everyone doing cooler things in mongo than me
[22:28:47] <matt1> Heya, is there anything wrong with using mongoengine and pymongo in the same app? mongoengine will handle all user/site related stuff and pymongo the content. I'm using flask if it makes any difference.
[22:58:51] <JFrame> Hey guys, I've been trying out mongodb today and it seems it's what i need, but i have to do one last thing: when using save (in Java) with a new DBObject, i need to give a key and a value, and the key is like 12345-1. I can search like find( { "12345-1.game_id" : 100 } ), but how could I find with that "KEY" as a wildcard? Like search every object that has game_id = 100
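(Keys can't be wildcarded in a query, so the usual workaround is to move the variable key down into a value. A hedged sketch with a stand-in "games" collection:)

    // instead of { "12345-1": { game_id: 100 } }, store the key as data:
    db.games.insert({ key: "12345-1", game_id: 100 });

    // the "any key" search then becomes a plain equality match:
    db.games.find({ game_id: 100 });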
[23:53:17] <platzhirsch> That's reasonable, I am just wondering because for my document mapper I can specify a sort order, too... but that's just for the relation :)