#mongodb logs for Friday the 9th of November, 2012

[00:00:01] <PedjaM> i have like 1000 q/s at spike
[00:00:03] <eka> PedjaM: not that I know
[00:00:06] <PedjaM> and usually about 700
[00:00:10] <PedjaM> db is about 300G
[00:00:19] <eka> PedjaM: I did it 3 times already... yes I know
[00:00:28] <eka> PedjaM: there is no other way... AFAIK
[00:00:49] <PedjaM> huh, that's not the answer I wanted to hear ;)))))
[00:00:57] <PedjaM> thanks for your time, really appreciate it...
[00:01:17] <PedjaM> hope that I can pay back a favor somehow
[00:01:17] <eka> Can I change the shard key after sharding a collection? No.
[00:01:20] <eka> quoted
[00:01:28] <eka> http://docs.mongodb.org/manual/faq/sharding/#faq-change-shard-key
[00:01:34] <eka> PedjaM: I've been there
[00:01:39] <eka> all this week
[00:01:51] <PedjaM> yes, well i am more worried about balancer speed than the shard key
[00:01:53] <eka> my DB is not so big but big enough
[00:02:06] <eka> PedjaM: what is the specs on your machine?
[00:02:14] <eka> server
[00:02:23] <eka> are
[00:02:25] <PedjaM> 16G RAM, 16 cores, 600G SSD
[00:02:25] <eka> too late
[00:02:33] <eka> nice machine
[00:02:47] <eka> i have VMs...
[00:02:49] <PedjaM> well, not enough ;)
[00:02:49] <eka> :(
[00:02:54] <eka> PedjaM: big DB
[00:02:58] <PedjaM> i have tried with mongo on VM but got nowhere
[00:03:14] <eka> that's hosted somewhere?
[00:03:20] <eka> if so how much?
[00:03:27] <PedjaM> it was a success at the start, but when the DB grew to ~80G it became a disaster
[00:03:33] <PedjaM> so i needed a dedicated server
[00:03:41] <PedjaM> then it was too slow again and i got SSD
[00:03:45] <eka> PedjaM: you can't recycle data?
[00:03:46] <PedjaM> which was a success
[00:03:53] <PedjaM> but now i again need something better
[00:04:04] <PedjaM> eka not really ;)
[00:04:07] <eka> PedjaM: if you can recycle... you can move out some data... store it
[00:04:17] <PedjaM> i can get it back from backup in case of emergency
[00:04:23] <PedjaM> but i wouldn't like to do that ;)
[00:04:33] <eka> PedjaM: but you need all that data live?
[00:04:37] <PedjaM> yes
[00:04:40] <PedjaM> that's the main issue
[00:04:46] <PedjaM> if i can stop all the data processing
[00:04:57] <PedjaM> balancer would move the data relatively fast
[00:05:15] <PedjaM> it had moved most of those 88 chunks during that period when all the background jobs were down
[00:05:32] <PedjaM> now max one chunk per hour
[00:06:02] <PedjaM> i was thinking that if i lower the chunk size, it would not need that much locking and could move faster
[00:06:09] <PedjaM> but there would be too many chunks on the other side
[00:06:54] <PedjaM> it seems that i will need to make a maintenance window, stop everything and hope that it will move data around fast enough
[00:07:16] <eka> PedjaM: with that kind of machine I think it will go fast
[00:08:07] <PedjaM> i hope so? ;)
[00:08:33] <eka> PedjaM: have a good ethernet connection between them?
[00:08:37] <eka> I mean the servers
[00:09:13] <PedjaM> it is gigabit connection
[00:09:30] <eka> and the other machine has the same specs?
[00:10:27] <PedjaM> netperf say 779.79 Mb/s
[00:10:39] <PedjaM> yes, the same specs
[00:10:53] <PedjaM> i actually have 5 of those, but the others are for other dbs/collections
[00:11:05] <PedjaM> and they don't have usage as big as this one
[00:11:18] <eka> dare I ask what kind of data it is
[00:11:59] <PedjaM> well it is just user data, some FB app with about 10M MAU
[00:12:10] <eka> MAU?
[00:12:21] <PedjaM> monthly active users
[00:12:25] <eka> I see
[00:12:27] <eka> nice
[00:13:01] <PedjaM> have like 3.5K concurrent users and a lot of background processing
[00:13:57] <eka> wow
[00:14:12] <eka> PedjaM: what language is used?
[00:14:28] <eka> programming language
[00:14:28] <PedjaM> english
[00:14:34] <PedjaM> ah, rails mostly
[00:14:39] <eka> I see
[00:14:46] <PedjaM> but there are also ios and android apps
[00:15:16] <PedjaM> backend for those is also on rails
[00:35:13] <eka> PedjaM: you are devops, sysadm or dev?
[00:41:07] <_m> Role separation?! wat? ;)
[01:16:16] <devnill> I lost my admin credentials, how can I reset them?
[01:16:27] <devnill> I'm running mongo on a linux server
[01:27:45] <Baribal> devnill, how about deactivating auth and setting new ones?
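
A sketch of Baribal's suggestion: restart mongod with auth turned off, recreate the admin user, then turn auth back on. The dbpath and user name here are assumptions, and db.addUser is the 2.x-era shell call:

    # restart mongod without --auth (or comment auth out of the config file)
    mongod --dbpath /data/db

    // then, in the mongo shell:
    use admin
    db.addUser("admin", "newPassword")

    # finally, restart mongod with --auth enabled again
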
[02:04:36] <Baribal> MWAHAHAHA, I'm generating 10 million records with random numbers, just to see the effects of indexing!
[02:05:04] <Baribal> BTW, is mongod threaded in any way?
[02:05:21] <skered> Can you use vi key binds in mongo shell 2.0.6?
[02:15:22] <Baribal> Interesting... The index seems to halve the insert rate.
[02:57:25] <frozenlock> I want to keep a history of every change that occurred in, say, {"key1": data, "key2": data}. If key2's data doesn't change very often, is it possible to make it refer to the last known change?
[02:58:12] <frozenlock> For example: {"key1": 100, "key2": "abc"}, {"key1": 101, "key2": "abc"}, {"key1": 105, "key2": "abc"}. Could it be something like {"key1": 100, "key2": "abc"}, {"key1": 101, "key2": _}, {"key1": 105, "key2": _}, where _ will return "abc"?
[03:01:33] <phira> frozenlock: you're probably just asking for trouble, I'd suggest just making a complete copy of the document and storing it in a history collection
[03:01:41] <Baribal> frozenlock, how about adding a revision number?
[03:02:33] <Baribal> {data: 10, revision: 0}, {data:3, revision:0}, {data:5, revision: 2}...
[03:06:12] <frozenlock> Baribal: Hmm, could work, but in my case I have mostly 1 value that's changing quite often and a bunch of others that remain 'mostly' static. Adding a revision for each would eat up any saved storage :(
[03:06:53] <frozenlock> phira: As it is that's exactly what I'm doing, I was just wondering if there was a more efficient way of doing it. My DB is eating diskspace :)
[03:06:57] <frozenlock> nom nom nom
[03:07:04] <Baribal> I just scaled up from 1 to 10 million documents, a query that took 0.5 seconds now takes 3.5 seconds. I thought indexes would scale better?
[03:07:11] <phira> buy more diskspace, it's a much simpler, more reliable solution
[03:07:19] <phira> however, I do understand
[03:07:28] <phira> you can do a kind of reassembly pattern without mongo's support
[03:07:41] <phira> but it's a pain
[03:08:08] <frozenlock> Yeah, I don't want to do that myself... I'm obviously going to mess up somewhere
[03:08:24] <phira> I've done it, it sucked, D-- would not do again
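
A minimal sketch of the history-collection approach phira describes, assuming a hypothetical docs collection mirrored by a docs_history collection:

    // archive the current version of the document before updating it
    var current = db.docs.findOne({_id: someId});
    delete current._id;                  // let the history collection assign its own _id
    current.archivedAt = new Date();     // hypothetical bookkeeping field
    db.docs_history.insert(current);
    db.docs.update({_id: someId}, {$set: {key1: 105}});
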
[03:09:15] <Baribal> There's a hitch when using revision numbers, though.
[03:09:33] <Baribal> What if two jobs try to add new versions concurrently?
[03:09:50] <Baribal> Maybe use a timestamp...
[03:10:41] <frozenlock> Or use an _id for each revision :p
[03:10:56] <phira> typically it's a failure even if you use a timestamp. The problem is that a new version of a document is typically derived from an old one, which means you actually have a concurrency conflict anyway - imagine two people changing a word doc and saving a new version, each has made a change based on version A, resulting in versions B and C, but C, even though it's the last one, is not derived from B
[03:11:23] <frozenlock> word...
[03:11:26] <phira> the solution to that is either to declare that this is a feature, or involve some kind of locking, or do changes as a transform.
[03:13:31] <Baribal> Well, MongoDB means no transactions, as AFAIK do most NoSQLs...
[03:13:42] <phira> you don't need a transaction to do that
[03:13:48] <phira> just locking
[03:13:51] <phira> which is a much simpler problem.
[03:13:59] <_m> Dirty writes are a PITA.
[03:14:01] <phira> you can lock using mongodb, no trouble.
[03:14:11] <Baribal> Oh?
[03:14:29] <phira> yeah, anything with an atomic update has the basic tools necessary to generate a lock
[03:14:40] <_m> safe: true
[03:15:09] <Baribal> Ah, I thought you meant that there was already a mechanism in place.
[03:15:23] <phira> an example of that is simply generating a big random number for yourself, then doing update() on the document assigning owner_task: $number, and then checking to see if the number on the document matches yours.
[03:15:39] <phira> where the update only writes if there's no existing one of course
[03:15:46] <phira> that way, if your number is there, you won the lock
[03:16:13] <phira> transactions are a much more difficult problem, but they're not necessary for this problem.
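
A rough sketch of the locking pattern phira outlines, built on a single atomic update; the docs collection and owner_task field are assumed names:

    // try to take the lock: the update only matches if nobody owns the document
    var token = Math.floor(Math.random() * 1e15);
    db.docs.update(
        {_id: someId, owner_task: {$exists: false}},
        {$set: {owner_task: token}}
    );
    // if our token ended up on the document, we won the lock
    var haveLock = (db.docs.findOne({_id: someId}).owner_task === token);
    if (haveLock) {
        // ... do the work, then release the lock
        db.docs.update({_id: someId, owner_task: token}, {$unset: {owner_task: 1}});
    }
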
[03:16:47] <_m> Baribal: Basically, if you set safe to true for a connection, you'll lock during an update. More-or-less.
[03:17:17] <phira> _m: different kind of lock
[03:17:26] <_m> Ahh, okay.
[03:17:37] <_m> Came into the convo half-cocked. I'll see myself out.
[03:17:42] <phira> haha all good
[03:17:51] <Baribal> Wasn't "safe" "only return the call when the update has been written to disk"?
[03:19:10] <Baribal> Anyways, good night.
[03:19:17] <phira> no, safe simply means that the operation will check for errors rather than returning as soon as it has handed off the data to mongo
[03:19:55] <phira> slower, but avoids thinking mongo accepted the data when it actually decided you were a loon
[03:21:10] <Baribal> Got an example of how to provoke a typical error? Driver bugs?
[03:21:32] <phira> I was just pondering that
[03:21:37] <phira> I'm honestly not sure :)
[05:37:01] <the-erm> How do you make the results list longer inside the mongo client? I can scroll, but I'd like more.
[05:40:17] <the-erm> I guess the 'it' command will continue the query.
[05:54:48] <crudson> the-erm: if you want it to return more or less in each group:
[05:54:48] <crudson> DBQuery.shellBatchSize = xxx
[05:55:15] <the-erm> Does it save it, or do I have to enter it every time?
[05:55:48] <crudson> the-erm: every time you issue a query? no
[05:56:06] <crudson> the-erm: add it to your .mongorc.js file if you want it to persist across mongo instances
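
For instance, a one-line ~/.mongorc.js makes the setting persist (50 is an arbitrary choice):

    // ~/.mongorc.js -- run by the mongo shell at startup
    DBQuery.shellBatchSize = 50;
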
[05:56:25] <the-erm> thanks.
[05:59:50] <crudson> np
[06:17:30] <ChrisPartridge> is it OK to use @ symbol in collection names?
[06:32:14] <crudson> ChrisPartridge: is there a good reason to?
[06:38:37] <the-erm> Is there a better way to get distinct keys in a collection than this? http://stackoverflow.com/questions/2298870/mongodb-get-names-of-all-keys-in-collection
[06:42:47] <crudson> the-erm: seems fine to me, but with questions like this I tend to ask the reason why from a document structure point of view. It's often easier to have known keys and variable values rather than keys being "freeform", if you get where I am coming from. This can manifest itself in many problematic ways down the road.
[06:43:29] <the-erm> I've inherited that kind of code.
[06:45:04] <crudson> the-erm: in that case a map-reduce would be my approach too. emit property names as keys and 'nothing' as value, unless you also want to count the frequency of the keys.
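
A sketch of that map-reduce over a hypothetical things collection; each distinct key name comes back as an emitted key:

    var map = function () {
        for (var k in this) { emit(k, null); }   // emit every property name
    };
    var reduce = function (key, values) { return null; };
    var res = db.things.mapReduce(map, reduce, {out: {inline: 1}});
    // res.results holds one {_id: keyName, value: null} entry per distinct key
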
[06:46:41] <samurai2> hi there, how to do something like db.collection.find().sort({ts:-1}).limit(1) in java driver? thanks :)
[06:50:37] <crudson> samurai2: http://api.mongodb.org/java/current/com/mongodb/DBCursor.html - sort and limit are right there on the cursor object
[06:51:37] <samurai2> crudson : thanks :)
[08:41:44] <[AD]Turbo> hi there
[08:55:12] <chovy> is there an easy way to get the most recent item in a collection?
[08:58:57] <crudson> chovy: not *really* - if you use the default _id you could maybe use it in a non-sharded environment, but if you want to do this either use a capped collection or add some field that represents insert_date
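
Sketches of both options crudson mentions, against a hypothetical items collection:

    // option 1: store an insert date and sort on it
    db.items.insert({name: "foo", insert_date: new Date()});
    db.items.find().sort({insert_date: -1}).limit(1);

    // option 2: in a non-sharded setup with default ObjectIds,
    // lean on the timestamp embedded in _id
    db.items.find().sort({_id: -1}).limit(1);
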
[09:04:03] <chovy> i am just trying to avoid doing this:
[09:04:17] <chovy> db.items.find({_id: ObjectId('myid')});
[09:04:34] <chovy> it's a lot of typing to put that id in there
[09:06:14] <IAD> chovy: you can use this: http://docs.mongodb.org/manual/reference/command/findAndModify/
[09:06:37] <IAD> i mean, you can {$inc : 1} some field
[09:14:21] <chovy> i don't understand
[10:08:16] <fatninja> I have a collection that has an ISODate() field, and an integer value field. I would like to group these dates by day, with the total of the integer value field. Any good approaches on this one?
[10:41:44] <Baribal> fatninja, You mean you provide a date (range) and want to get the sum of the corresponding integers?
[10:42:19] <Baribal> I think aggregation would be the hippest way to do that. If you want intermediate results, mapreduce.
[11:28:51] <fatninja> ok, so aggregation might be the solution
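
A sketch of that aggregation (2.2+); the events collection and the ts (ISODate) and value (integer) field names are assumptions:

    db.events.aggregate(
        {$project: {
            year:  {$year: "$ts"},
            month: {$month: "$ts"},
            day:   {$dayOfMonth: "$ts"},
            value: 1
        }},
        {$group: {
            _id:   {year: "$year", month: "$month", day: "$day"},
            total: {$sum: "$value"}   // one daily total per group
        }}
    );
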
[11:36:34] <NodeX> what's the issue you're trying to overcome?
[12:07:20] <visof> hello
[12:07:45] <visof> can mongodb be a good alternative to mysql for huge database ?
[12:07:56] <NodeX> depends on your needs
[12:08:05] <visof> NodeX, how ?
[12:08:18] <NodeX> it's a simple enough statement lol
[12:08:29] <NodeX> it depends on what you need from your database
[12:11:46] <remonvv> visof, your question is roughly similar to asking "Can a truck do what a Ferrari can?" and NodeX asking "Depends, do you want to move people or go 200mph"
[12:12:07] <remonvv> In other words, tell us what you want to do with it. Obviously a NoSQL database isn't going to be a drop in replacement for an RDBMS.
[12:12:11] <NodeX> I have come to the conclusion that CSRF is not 100% preventable :(
[12:12:54] <Zelest> not?
[12:14:15] <NodeX> well, you can certainly mitigate an exploiter brute forcing, aside from the obvious number of attempts
[12:15:21] <NodeX> but even with token generation an exploiter can always generate said token in another shell and pass that information around their system to send to yours
[12:17:11] <visof> NodeX, is mongodb suitable to work fine with facebook ?
[12:19:51] <NodeX> visof : facebook does not entirely run on mysql - only parts of it
[12:20:24] <visof> NodeX, so what does it run?
[12:21:11] <NodeX> go ask zimmerman ;)
[12:51:21] <jdevelop> Hi all! I have a collection of projects and a collection of permissions for the projects; the permissions collection holds documents with a project id, a user id, and some ACLs. Now I want to find all projects which have a specific ACL. How do I use map-reduce in order to join 2 collections? Do I need to follow http://tebros.com/2011/07/using-mongodb-mapreduce-to-join-2-collections/ ?
[12:51:43] <jdevelop> or may be newer versions of mongo have something else for achieving this?
[12:53:06] <Gargoyle> jdevelop: It's not relational. Get a list of the project_ids from your permissions collection and then use those ids to fetch the projects.
[12:53:35] <jdevelop> Gargoyle: that list could have millions of records
[12:53:50] <jdevelop> not practical to take it into memory
[12:54:21] <Gargoyle> How many users per project?
[12:54:25] <jdevelop> so basically I'm looking for map/reduce way of doing that
[12:54:33] <jdevelop> Gargoyle: 2 or 3
[12:54:58] <jdevelop> but actually I'm looking for a single row for a project in permissions
[12:55:00] <Gargoyle> then put the user id's and ACL entries in your projects
[12:55:04] <jdevelop> which identifies the project as "public"
[12:55:32] <jdevelop> Gargoyle: I was thinking about that, but it's huge refactoring
[12:56:36] <Gargoyle> refactoring FTW! :)
[12:57:42] <jdevelop> not at this stage
[13:01:09] <Gargoyle> jdevelop: Well. It sounds like you have a relational like schema in a non relational db, so you are going to struggle. I think map/reduce might help, but you'll still run into memory issues!
[13:03:04] <NodeX> I attacked this kind of problem with groups
[13:04:50] <jdevelop> NodeX: groups?
[13:04:56] <NodeX> yes, user groups
[13:05:41] <NodeX> i.e. in the first place I didn't use uids to assign things to users, which made it scalable
[13:36:57] <nopz> Hi, is there a way to stop an indexing operation ?
[13:37:40] <IAD> nopz: yep, sleep some hours =)
[13:37:53] <nopz> Ho! fuck :)
[15:00:21] <meghan> fyi, the office hours for the M101 course (mongodb for developers) are happening at https://plus.google.com/u/1/101024085748034940765/posts/BukcLAgXWhi
[15:00:45] <Derick> sorry meghan, I'm a drop out :-/
[15:01:03] <NodeX> :P
[15:01:15] <Zelest> http://de2.eu.apcdn.com/full/88312.png :-D
[15:02:56] <meghan> derick is a slacker :-P
[15:03:27] <Derick> :-þ
[15:09:11] <NodeX> quick tip for anyone using Eway as a payment provider - they are about to go into liquidation so you may want to cancel your accounts and find another provider
[15:21:12] <P-Alex> hi all
[15:21:18] <P-Alex> what is wrong in this query
[15:21:20] <P-Alex> db.videos.update({_id: 1975660}, {$set: {rate.likes: 1}})
[15:21:40] <NodeX> what's the error
[15:22:01] <P-Alex> Fri Nov 9 16:15:58 SyntaxError: missing : after property id (shell):1
[15:23:18] <P-Alex> instead if i use -> db.videos.update({_id: 1975660}, {$set: {rate.likes: 1}}) everything works
[15:23:35] <nopz> 'rate.likes' ?
[15:23:38] <NodeX> what's the difference
[15:23:42] <P-Alex> *db.videos.update({_id: 1975660}, {$set: {rate: 1}})
[15:23:45] <P-Alex> sorry
[15:23:49] <NodeX> quote the dot notation
[15:23:54] <nopz> yes
[15:23:58] <NodeX> 'rate.likes'
[15:24:34] <P-Alex> another error
[15:24:35] <P-Alex> LEFT_SUBFIELD only supports Object: rate not: 1
[15:24:56] <NodeX> db.videos.update({_id: 1975660}, {$set: {"rate.likes": 1}})
[15:25:57] <P-Alex> NodeX, problem solved thx all :)
[15:26:09] <P-Alex> the second error is a consequence of the previous insert
[15:29:09] <wiseguysonly> I've managed to master group and sum, but now I am trying to do something for which I have no clue how to start. I want the sum of a field for each of the last 7 days.
[15:29:37] <wiseguysonly> so 2nd = x, 3rd = x etc etc.
[15:30:18] <stefancrs> morning
[15:32:08] <makin> Hi people, I'm very confused by this error: http://stackoverflow.com/questions/13310300/doctrine-mongo-group-query Can anyone help me?
[15:32:45] <stefancrs> sorry, I know this question is slightly off-topic, but I'm developing a restful API with mongodb as the storage. I want to be able to do queries through query parameters but can't really think of a good way to specify values in an embedded object. For example /api/users?location.city=brooklyn&age=34 or whatever. I understand I need to take another approach, but I'm out of ideas... :)
[15:34:12] <Baribal> Hi again.
[15:35:48] <stefancrs> I _could_ of course just send the actual string for the query, like /api/users?query={"$and" : ["location.city" : "brooklyn", "age" : 34]}, is that the best way forward? I'd prefer if it wasn't THAT mongodb'ified on the API level... :)
[15:39:07] <Baribal> stefancrs, that sounds interesting, could you repeat the beginning of the question?
[15:39:28] <stefancrs> sorry, I know this question is slightly off-topic, but I'm developing a restful API with mongodb as the storage. I want to be able to do queries through query parameters but can't really think of a good way to specify values in an embedded object. For example /api/users?location.city=brooklyn&age=34 or whatever. I understand I need to take another approach, but I'm out of ideas... :)
[15:39:55] <NodeX> stefancrs : you really should not let people blindly send you queries
[15:39:59] <stefancrs> and oh, sorry, forgot some {} in the query-string version...
[15:40:01] <NodeX> it's a very very bad idea
[15:40:38] <stefancrs> in the latter case, agreed
[15:41:12] <stefancrs> mind you, you can only read / write to data you're authorized to
[15:41:44] <NodeX> I've built a few APIs atop mongo and the best way I found was to receive the payload as json and parse it out into my own language
[15:42:09] <NodeX> like what you have done but with an extra layer of abstraction
[15:42:15] <stefancrs> so like.... you skipped $and etc?
[15:42:41] <NodeX> I don't let through bad queries that could potentially DoS my servers
[15:43:20] <stefancrs> I check authentication before anything else gets executed
[15:43:27] <stefancrs> so that helps :)
[15:43:58] <stefancrs> but I don't think I'll ever need to expose anything but AND'ed operations
[15:44:16] <stefancrs> so I could just have a flat json array I think
[15:45:09] <Baribal> stefancrs, I think you're going at it the wrong way with the "What can I provide?" approach. First ask yourself what you want or need to provide, then expose that as the API and glue it to mongo internally.
[15:45:33] <NodeX> that's what he's doing ;)
[15:45:41] <stefancrs> :)
[15:46:07] <stefancrs> I _know_ I need to be able to do finds on basically any embedded document value
[15:46:24] <stefancrs> through the api
[15:46:39] <stefancrs> but I think I can use $and for all the query parameters
[15:46:50] <stefancrs> so a json array should fit the bill really
[15:51:22] <Baribal> Sounds good.
[15:52:10] <Baribal> What should be the time factor on searching index ranges? O(n), O(log(n))?
[15:55:13] <stefancrs> hm, I could just supply an array directly in the get...
[15:55:20] <stefancrs> does that make sense?
[15:55:41] <Derick> stefancrs: be very careful with that, you can do all kinds of strange things then
[15:55:55] <stefancrs> like /api/users?query['location.city']='brooklyn'&query['age']=34 ?
[15:56:15] <Derick> be aware that you can also do:
[15:56:16] <stefancrs> Derick: how can I do more strange things like that than if it's a json payload with an array in it?
[15:56:41] <Derick> query['location.city']['$ne']= ...
[15:56:53] <stefancrs> is that bad? :)
[15:57:08] <Derick> stefancrs: that was a simple example
[15:57:28] <stefancrs> yeah, but understand that authorization is required and you can only read data you're allowed to read
[15:57:57] <stefancrs> what would be a good example, as in something bad happens?
[16:00:03] <nledon> Hello all. Anyone know why I have a node whose state is UNKNOWN? I have one PRIMARY that's been initiated, and it added the unknown node just fine, but it won't become SECONDARY.
[16:04:47] <wiseguysonly> I'm doing the following: http://pastebin.com/YF1apX08 but it seems to bring back the sum of all the docs rather than just for the id specified
[16:05:20] <wiseguysonly> I want to use group really for a sum of playtime for the last x days
[16:05:28] <wiseguysonly> broken down by day
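
A sketch of the per-day sum wiseguysonly is after, restricted to the last 7 days; the plays collection and the date and playtime fields are assumed names:

    var weekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
    db.plays.aggregate(
        {$match: {date: {$gte: weekAgo}}},                      // only the last 7 days
        {$project: {day: {$dayOfYear: "$date"}, playtime: 1}},
        {$group: {_id: "$day", total: {$sum: "$playtime"}}}     // one total per day
    );
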
[16:16:30] <jtomasrl> if i have ids inside objects in an array, how can i take those ids and search another collection?
[16:19:24] <nbargnesi> yes
[16:20:29] <stefancrs> nbargnesi: I think you missed a "how" there...
[16:21:18] <nbargnesi> four monitors + a tiling window manager -> wrong IRC channels :/
[16:21:52] <stefancrs> hah
[16:28:06] <Tobsn> someone here using mongoose? their channel is pretty dead
[16:32:36] <IAD> m102 office hours https://www.youtube.com/watch?v=p3ibUe-gqRw
[18:51:51] <JBJB> hey
[18:51:59] <JBJB> Got a Question.
[18:52:16] <JBJB> trying to addtoset into a nested>nested document
[18:53:44] <JBJB> "comments": [
[18:53:44] <JBJB> {
[18:53:45] <JBJB> "comment_id": "00001",
[18:53:45] <JBJB> "title": "Test",
[18:53:45] <JBJB> "comment": "TEST Comment",
[18:53:45] <JBJB> "user_number": "000019",
[18:53:49] <JBJB> "provider_id": "",
[18:53:51] <JBJB> "user_id": "",
[18:53:53] <JBJB> "dateTime": 1351986266,
[18:53:55] <JBJB> "likes": [
[18:53:57] <JBJB> {
[18:53:59] <JBJB> "user_id": "test1",
[18:54:01] <JBJB> "dateTime": 1351986266
[18:54:03] <JBJB> }
[18:54:05] <JBJB> ],
[18:54:07] <JBJB> "flags": [
[18:54:09] <JBJB> {
[18:54:11] <JBJB> "user_id": "",
[18:54:13] <JBJB> "reason": "",
[18:54:15] <JBJB> "dateTime": 1351986266
[18:54:19] <JBJB> }
[18:54:21] <JBJB> ],
[18:54:23] <JBJB> "replies": [
[18:54:25] <JBJB> {
[18:54:27] <JBJB> "user_id": "565454",
[18:54:29] <JBJB> "reply": "Test Reply 1",
[18:54:31] <JBJB> "dateTime": 1351986266
[18:54:33] <JBJB> },
[18:54:35] <JBJB> {
[18:54:37] <JBJB> "user_id": "565454",
[18:54:39] <JBJB> "reply": "Test Reply 2",
[18:54:41] <JBJB> "dateTime": 1351986266
[18:54:43] <JBJB> }
[18:54:45] <JBJB> ]
[18:54:49] <JBJB> },
[18:54:51] <JBJB> trying to add array doc into that replies array
[18:54:53] <JBJB> trying this in PHP and it's not working
[18:54:55] <JBJB> $r = $commentRec->update(
[18:54:57] <JBJB> array(
[18:54:59] <JBJB> '_id' => new MongoId($provider_id),
[18:55:01] <JBJB> '_object.comments.$.comment_id' => $comment_id
[18:55:03] <JBJB> ),
[18:55:05] <JBJB> array('$addToSet' => array('_object.comments.$.replies' => $obj))
[18:55:07] <JBJB> );
[18:57:51] <ckd> OMG
[18:57:55] <ckd> dude, pastie.
[18:58:14] <NodeX> are you retarded JBJB
[18:58:20] <NodeX> use a pastebin
[18:59:59] <JBJB> People who call other people retarded are usually to some extent, retarded.
[19:00:14] <JBJB> Let me toss this into a pastie
[19:00:38] <ckd> That's the spirit
[19:01:54] <JBJB> http://pastebin.com/SfbQavge Thanks ckd
[19:04:07] <ckd> what's with the $ in '_object.comments.$.comment_id'
[19:04:19] <JBJB> I was trying to experiment with positional
[19:04:37] <JBJB> as the $ can be any number depending on array position.
[19:05:52] <JBJB> need to addToSet to an embedded array inside an embedded document
[19:06:20] <JBJB> i just pasted the first comment of the record to the pastie
[19:24:53] <ckd> ah, your query's just a little off
[19:24:54] <ckd> one sec
[19:25:15] <JBJB> thinking I may need elemMatch (reading up on it now)
[19:26:18] <ckd> try it with just _object.comments.comment_id
[19:27:56] <ckd> http://pastie.org/5352563 works for me
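
A shell sketch of ckd's working fix: the positional $ stays in the update path but comes out of the query side, which matches on the plain dotted field instead. The collection name and the ObjectId are placeholders:

    db.providers.update(
        {_id: ObjectId("50a123456789012345678901"), "_object.comments.comment_id": "00001"},
        {$addToSet: {"_object.comments.$.replies": {
            user_id: "565454",
            reply: "Test Reply 3",
            dateTime: 1351986266
        }}}
    );
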
[19:29:56] <ckd> and actually, depending on how nested that doc is, you probably shouldn't have _object in there either
[19:30:05] <JBJB> array(5) { ["updatedExisting"]=> bool(false) ["n"]=> int(0) ["connectionId"]=> int(956) ["err"]=> NULL ["ok"]=> float(1) }
[19:30:21] <JBJB> didn't find the query
[19:30:44] <JBJB> [_object] is the parent array
[19:30:57] <Bilge> Hey JBJB
[19:31:00] <Bilge> You're gay
[19:31:07] <JBJB> thanks
[19:31:28] <JBJB> Bilge I'm quite content, you are correct
[19:31:36] <Bilge> Please shut the fuck up
[19:31:37] <ckd> JBJB: pastebin the entire object, not just part of it
[19:32:06] <JBJB> Bilge, please read your own comment to yourself. Thanks!
[19:33:15] <meghan> let's be civil please :)
[19:35:06] <JBJB> Thnx meghan
[19:37:51] <cedrichurst> geospatial question, i know it's possible to get the list of documents inside a specified polygon
[19:38:21] <cedrichurst> but if my documents contain polygons, is there a way to get the list of documents with polygons surrounding a specified point?
[19:38:42] <cedrichurst> so my query would provide a geopoint
[19:38:47] <ckd> JBJB: are your document ids actually stored like that?
[19:38:58] <cedrichurst> and the results would be all documents with a polygon that the provided point is inside
[20:00:55] <fumduq> I'm having some trouble with replication in which replicas do not catch up to their masters. the LVM module (that supports the mongo directory) crashes in the kernel, and when brought back to life, mongo just gets more and more out of sync
[20:02:40] <fumduq> is there a way for me to determine which operation is causing replication to stop?
[20:56:27] <fumduq> well, got a nice couple of stack traces, and now it says that it's in sync
[20:56:33] <fumduq> whee.
[21:08:27] <doxavore> Does the Mongo ruby driver 1.7.0 not release connections back to the pool in JRuby on Rails?